- Detect and prevent an xref table/stream at a certain offset from being read twice; malformed xref tables with circular references could otherwise cause the table-reading to loop forever.
- Another approach could be to prevent TryReadTableAtOffset from changing the bytes' CurrentOffset to the lastObjPosition in its attempt to read a table (e.g. restore CurrentOffset after the attempt to read a table), so the outer bytes loop could continue its search through the entire byte sequence unaffected.
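A minimal sketch of the double-read guard described above, in Python for brevity (the library itself is C#; `read_table_at` and the `prev` key are hypothetical stand-ins for the real table reader and the /Prev link):

```python
def read_xref_chain(read_table_at, start_offset):
    """Follow /Prev links between xref tables, refusing to visit
    the same offset twice so a circular chain cannot loop forever."""
    visited = set()
    tables = []
    offset = start_offset
    while offset is not None:
        if offset in visited:
            # Malformed file: the /Prev chain points back at an offset
            # we have already read; stop instead of looping forever.
            break
        visited.add(offset)
        table = read_table_at(offset)
        if table is None:
            break
        tables.append(table)
        offset = table.get("prev")
    return tables
```

With a circular chain (10 → 20 → 10) this reads each table exactly once and then stops.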
Including the stream xref means that the formerly missing font is no longer missing, so simply run the two test cases under the (stricter) setting SkipMissingFonts=false.
With the fix for including associated streams, this test now finds more text on the first page. I've verified using Aspose.PDF and by viewing the ErcotFacts.pdf file being tested that yes, it was indeed missing part of the text before.
- If an XrefTable has an associated stream, as indicated via the XrefStm-property, then read and add that XrefStream
- Any table can have 0 or 1 such associated streams
- A caveat: such an associated stream might theoretically also appear in the Parts sequence, in which case it would be encountered twice: once while looping through all the parts alongside the regular tables, and once via its association with one of those tables. This doesn't seem harmful, since the offsets are eventually flattened anyway and stored by their offset key in a mapping table.
On a large sample of PDF files, PdfPig failed to read the correct StructTree object for about 1% of them. The StructTree object was simply missing from the CrossReferenceTable.CrossReferenceTable.
It turned out that the constructed CrossReferenceTable could miss Stream parts when there were multiple Table parts, because a stream would only be added if it was associated with the very first Table part. The remedy is to check for and add streams associated with any of the Table parts, not just the first one.
On a sample of 72 files where this failed, this change fixed the StructTree for all of them.
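The fix described above can be sketched in Python (the real code is C#; `table_parts`, the `xref_stm` key, and `read_xref_stream_at` are hypothetical stand-ins for the parsed parts, the /XRefStm entry, and the stream reader):

```python
def collect_offsets(table_parts, read_xref_stream_at):
    """Merge object offsets from every table part and from any xref
    stream a part references via /XRefStm -- not just the stream
    attached to the very first part, which was the old behaviour."""
    offsets = {}
    seen_streams = set()
    for part in table_parts:  # every part, not only table_parts[0]
        offsets.update(part["offsets"])
        stream_offset = part.get("xref_stm")
        if stream_offset is not None and stream_offset not in seen_streams:
            # Each associated stream is read at most once, so a stream
            # that also appears in the parts sequence causes no harm.
            seen_streams.add(stream_offset)
            offsets.update(read_xref_stream_at(stream_offset))
    return offsets
```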
* read last line of ignore file
- do not cancel other matrix jobs if one test fails
- read all lines of the ignore list even if it doesn't end with a newline
- add ignore list for 0008 and 0009
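The newline fix above amounts to splitting on line boundaries instead of assuming the file ends with a newline; a Python sketch (the actual CI scripts may differ):

```python
def read_ignore_list(text):
    """Return every non-empty entry from an ignore file, including a
    final line that is not terminated by a trailing newline."""
    return [line.strip() for line in text.splitlines() if line.strip()]
```

`splitlines()` yields the last line whether or not the file ends with a newline, which is exactly the case the old line-by-line read dropped.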
* support missing object numbers when brute-forcing
the file 10404 (ironically) contains references for its info dictionary, numbered 43 0, that cannot be found. changes the brute-force code so that objects can be entirely missing
* fix test since document is now opened successfully but mediabox is broken
in document 10122 the font and xobject names are the same, so the xobject overwrote references to the font for the page content; separate the dictionaries
this file contains corrupt content following an inline image, but other parsers just treat this content as part of the image and parse the rest of the file successfully
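The lenient recovery described above can be sketched as a scan for the inline image's end-of-image operator, swallowing any corrupt bytes before it (a simplified Python illustration; real parsers such as PdfPig also check the byte following `EI`):

```python
def end_of_inline_image(data, start):
    """Scan forward from the start of inline image data to the 'EI'
    end-of-image operator, treating everything before it (even corrupt
    bytes) as image data, the way lenient parsers recover such files."""
    i = start
    while i < len(data) - 1:
        # Only accept 'EI' at the start or preceded by whitespace, so
        # binary image bytes that happen to contain 'EI' mid-run are
        # less likely to end the image early.
        if data[i:i + 2] == b"EI" and (i == start or data[i - 1:i] in (b" ", b"\n", b"\r")):
            return i
        i += 1
    return len(data)  # no terminator found: the image runs to the end
```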
* remove alpha postfix, releases will increment version
* update the master build job to draft a release
* add publish action to publish full release
* enable setting assembly and file version
* bump assembly and file version for package project
---------
Co-authored-by: BobLd <38405645+BobLd@users.noreply.github.com>
* add test for filebufferingreadstream
* #1124 do not trust reported stream length if bytes can be read at end
the filebufferingreadstream input stream does not report more than the read
length. the change to seek the xref in a sliding window from the end broke
under the assumption that the reported length was correct. here we switch to
reading the window, or continuing to read if we can read beyond the stream's
initially reported length, while seeking the startxref marker
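The core of the fix above is to probe past the reported length rather than trust it; a Python sketch (the real code is C# working on a Stream; `read_at` is a hypothetical positional reader):

```python
def actual_length(read_at, reported_length, chunk=1024):
    """Do not trust the reported stream length: if bytes can still be
    read at the reported end, keep extending until a read returns
    nothing, then use that position as the true end of the stream."""
    end = reported_length
    while True:
        data = read_at(end, chunk)
        if not data:
            return end  # a genuinely empty read marks the real end
        end += len(data)
```

A wrapper that under-reports its length (like FileBufferingReadStream, which only reports what has been read so far) is then handled the same as an honest one.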
* remove rogue newlines
* move file parsing to single-pass static methods
for the file 0002973.pdf in the test corpus we need to completely overhaul
how initial xref parsing is done since we need to locate the xref stream by
brute-force and this is currently broken. i wanted to take this opportunity to
change the logic to be more imperative and less like the pdfbox methods with
instance data and classes.
currently the logic is split between the xref offset validator and parser methods
and we call the validator logic twice, followed by brute-force searching again
in the actual parser. we're going to move to a single method that performs
the following steps:
1. find the first (from the end) occurrence of "startxref" and pull out the location
in bytes. this will also support "startref" since some files in the wild have that
2. go to that offset if found and parse the chain of tables or streams by /prev
reference
3. if any element in step 2 fails then we perform a single brute-force pass over the
entire file and, like pdfbox, treat xrefs later in the file as the ultimate arbiter
of the object positions. while we do this we can potentially capture the actual
object offsets, since the xref positions are probably incorrect too.
the aim with this is to avoid as much seeking and re-reading of bytes as
possible. while this won't technically be single-pass it gets us much closer. it
also removes the more strict logic requiring a "startxref" token to exist and be
valid, since we can repair this by brute-force anyway.
we will surface as much information as possible from the static method so that
we could in future support an object explorer ui for pdfs.
this will also be more resilient to invalid xref formats with e.g. comment tokens
or missing newlines.
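Step 1 above can be sketched in Python (the library itself is C#; this is only an illustration of the marker search, not the full chain-following or brute-force fallback):

```python
def locate_startxref(data):
    """Find the last occurrence of 'startxref' (or the misspelled
    'startref' seen in files in the wild) and parse the byte offset
    that follows it. Returns None if no usable marker exists, in
    which case the caller falls back to a brute-force scan."""
    for marker in (b"startxref", b"startref"):
        pos = data.rfind(marker)
        if pos >= 0:
            tail = data[pos + len(marker):].split()
            if tail and tail[0].isdigit():
                return int(tail[0])
    return None  # step 3: repair by brute-forcing the whole file
```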
* move more parsing to the static classes
* plumb through the new parsing results
* plug in new parser and remove old classes, port tests to new classes
* update tests to reflect logic changes
* apply correction when file header has offset
* ignore console runner launch settings
* skip offsets outside of file bounds
* fix parsing tables missing a line break
* use brute forced locations if they're already present
* only treat line breaks and spaces as whitespace for stream content
* address review comments
---------
Co-authored-by: BobLd <38405645+BobLd@users.noreply.github.com>
* Refactor letter handling by orientation for efficiency
Improved the processing of letters based on their text orientation by preallocating separate lists for each orientation (horizontal, rotate270, rotate180, rotate90, and other). This change reduces multiple calls to `GetWords` and minimizes enumerations and allocations, enhancing performance and readability. Each letter is now added to the appropriate list in a single iteration over the `letters` collection.
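The bucketing strategy described above can be sketched in Python (the real code is C# using preallocated lists; the orientation names follow the commit text, and the letter shape here is a hypothetical stand-in):

```python
ORIENTATIONS = ("horizontal", "rotate90", "rotate180", "rotate270", "other")

def bucket_by_orientation(letters):
    """Single pass over the letters, appending each to the list for its
    orientation, instead of filtering the whole sequence once per
    orientation (which repeats the enumeration several times)."""
    buckets = {o: [] for o in ORIENTATIONS}
    for letter in letters:
        orientation = letter.get("orientation", "other")
        # Anything that is not one of the four known rotations
        # falls into the catch-all "other" bucket.
        buckets[orientation if orientation in buckets else "other"].append(letter)
    return buckets
```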
* Update target frameworks to include net9.0
Expanded compatibility in `UglyToad.PdfPig.csproj` by adding
`net9.0` to the list of target frameworks, alongside existing
versions.
* Add .NET 9.0 support and refactor key components
Updated project files for UglyToad.PdfPig to target .NET 9.0, enhancing compatibility with the latest framework features.
Refactored `GetBlocks` in `DocstrumBoundingBoxes.cs` for improved input handling and performance.
Significantly optimized `NearestNeighbourWordExtractor.cs` by replacing multiple lists with an array of buckets and implementing parallel processing for better efficiency.
Consistent updates across `Fonts`, `Tests`, `Tokenization`, and `Tokens` project files to include .NET 9.0 support.
* Improve null checks and optimize list handling
- Updated null check for `words` in `DocstrumBoundingBoxes.cs` for better readability and performance.
- Changed from `ToList()` to `ToArray()` to avoid unnecessary enumeration.
- Added `results.TrimExcess()` in `NearestNeighbourWordExtractor.cs` to optimize memory usage.
---------
Co-authored-by: Chuck Beasley <CBeasley@kilpatricktownsend.com>
as long as there is a pages entry we accept this in lenient parsing mode. this
is to fix document 006705.pdf in the corpus, which had '/calalog' as the dictionary
entry.
also adds a test for some weird content stream content in 0006324.pdf where
numbers seem to get split in the content stream at a decimal place. this is
just to check that our parser doesn't hard crash
* update readme to avoid people using `page.Text` or asking about editing docs
we need to be clearer because beloved chat gpt falls into the trap of
recommending `page.Text` when asked about the library, even though this
text is usually the wrong field to use
* tabs to spaces
* rogue tab