Currently, given an example like:
```http
GET /foo HTTP/1.1
Content-Type: application/json
User-Agent: foo
{"hello": "world"}
```
It will fail to highlight because the content type gets reset when processing the `User-Agent` header. This is particularly problematic because Go maps have randomized iteration order and `http.Request` uses a map for headers, so if you are generating the above from an `http.Request`, some runs would highlight correctly and others would silently fail.
This PR fixes it by resetting `isContentType` once we've read the actual content type, which prevents replacing it with later literals.
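A minimal sketch of the failure mode and the fix. The `isContentType` flag comes from the PR; the token loop and all other names here are hypothetical simplifications, not Chroma's actual code:

```go
package main

import "fmt"

type token struct {
	kind  string // "name" or "literal"
	value string
}

// captureContentType mimics the lexer's header pass. With resetAfterRead
// false (the bug), isContentType is never cleared, so every later header
// value overwrites the captured content type. With resetAfterRead true
// (the fix), the flag is cleared once the actual content type is read.
func captureContentType(tokens []token, resetAfterRead bool) string {
	var contentType string
	isContentType := false
	for _, t := range tokens {
		switch t.kind {
		case "name":
			if t.value == "Content-Type" {
				isContentType = true
			}
		case "literal":
			if isContentType {
				contentType = t.value
				if resetAfterRead {
					isContentType = false // the fix
				}
			}
		}
	}
	return contentType
}

func main() {
	tokens := []token{
		{"name", "Content-Type"}, {"literal", "application/json"},
		{"name", "User-Agent"}, {"literal", "foo"},
	}
	fmt.Println(captureContentType(tokens, false)) // buggy: "foo"
	fmt.Println(captureContentType(tokens, true))  // fixed: "application/json"
}
```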
Treat `(` and `)` as text in the lexer
Added test files and a small lexer change
Fixed 'err' messages in Chroma output
Removed postfix behind numbers
This was originally from the C# lexer, but is not needed for TradingView
Improved single-line comment, punctuation, and operator handling
Simplified text match, improved punctuation and operators
Add slash to punctuation
Added missing named variables
Added proper test data with .expected file
Added TradingView lexer
This commit refactors code from the markdown lexer into the chroma
package, and alters the PostgreSQL and CQL lexers to make use of it.
Additionally, an example markdown with the various sublexers is added.
C++ highlighting ignores the `class` keyword if it is not followed by a space:
```cpp
template<class T> struct X;   // ok
template<class> struct X;     // fails
template<class...> struct X;  // fails
X<class::Y> x;                // fails
```
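The behavior is consistent with a rule that requires a space after `class`, versus one that only requires a word boundary. The two regexes below are illustrative simplifications, not the lexer's actual rules:

```go
package main

import (
	"fmt"
	"regexp"
)

var (
	// spaceRule mirrors the reported bug: `class` only matches when
	// followed by a literal space.
	spaceRule = regexp.MustCompile(`\bclass `)
	// boundaryRule matches `class` at any word boundary, covering the
	// `<class>`, `<class...>`, and `class::` cases as well.
	boundaryRule = regexp.MustCompile(`\bclass\b`)
)

func main() {
	samples := []string{
		"template<class T> struct X;",
		"template<class> struct X;",
		"template<class...> struct X;",
		"X<class::Y> x;",
	}
	for _, s := range samples {
		fmt.Printf("%-31q space-rule:%-5v boundary-rule:%v\n",
			s, spaceRule.MatchString(s), boundaryRule.MatchString(s))
	}
}
```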
This is a lexer that is useful for templating languages, where the
surrounding text may be of a different syntax, e.g. PHP+HTML.
The PHP lexer has been changed accordingly.
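The core idea can be sketched as follows. This toy splitter (all names hypothetical, not Chroma's API) carves a template into host-language and embedded-language regions so each region could be handed to its own sublexer:

```go
package main

import (
	"fmt"
	"strings"
)

type region struct {
	lang string
	text string
}

// splitTemplate splits PHP+HTML source into regions: everything between
// "<?php" and "?>" is PHP, the rest is the surrounding HTML. A real
// delegating lexer would then tokenize each region with its sublexer.
func splitTemplate(src string) []region {
	var regions []region
	for len(src) > 0 {
		start := strings.Index(src, "<?php")
		if start < 0 {
			regions = append(regions, region{"html", src})
			break
		}
		if start > 0 {
			regions = append(regions, region{"html", src[:start]})
		}
		src = src[start:]
		end := strings.Index(src, "?>")
		if end < 0 {
			// unterminated PHP block: the rest is PHP
			regions = append(regions, region{"php", src})
			break
		}
		regions = append(regions, region{"php", src[:end+2]})
		src = src[end+2:]
	}
	return regions
}

func main() {
	for _, r := range splitTemplate(`<p><?php echo $x; ?></p>`) {
		fmt.Printf("%s: %q\n", r.lang, r.text)
	}
}
```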
Fixes #80
This was done to speed up incremental compilation when working on
lexers. That is, modifying a single lexer will no longer require
recompiling all lexers.
This is a slightly backwards-incompatible change, in that lexers are no
longer exported directly in the lexers package. The registry API is
"aliased" at the old location.