Cleanup | Consolidate connection capability metadata #3862
edwardneal wants to merge 27 commits into dotnet:main
Conversation
This class parses FEATUREEXTACK and LOGINACK streams, updating an object which specifies the capabilities of a connection.
This also means that we no longer need to hold the original SqlLoginAck record in memory.
This is adjacent cleanup which is best kept separate from the next step.
This includes the now-redundant checking (and associated exception) of the TDS version, and the assignment to _is20XX. Remove the now-redundant _is20XX fields.
We only ever use two of these versions, and they're based on TDS versions rather than SQL Server versions. Convert the Major/Minor/Increment bit-shifting to a constant value for clarity, and associate it with ConnectionCapabilities to eliminate duplication. Also add explanatory comment to describe reason for big-endian vs. little-endian reads.
Move UTF8 feature detection handling to ConnectionCapabilities.
This was always equal to !Capability.DnsCaching.
This enables the if condition from OnFeatureExtAck to continue to apply to capability processing. Also remove now-redundant comments, and clean up one reference to IsSQLDNSCachingSupported.
This is never persisted, and eliminates an allocation.
One missing validation path. Also switched conditions to use pattern matching and to better align with original method for easier comparison and better codegen.
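The "bit-shifting to a constant" change in the commit list above could look something like this sketch. The class and constant names are illustrative, not the driver's actual identifiers; TDS 7.4 is conventionally encoded as `0x74000004`, and the commit note above explains why the version bytes are read big-endian unlike most TDS fields.

```csharp
// Hedged sketch: replacing runtime Major/Minor/Increment bit-shifting with a
// precomposed constant. Names here are illustrative, not the driver's actual ones.
internal static class ConnectionCapabilityConstants
{
    // TDS 7.4 on the wire: major 0x74, minor 0x00, increment 0x0004.
    // Equivalent to (0x74u << 24) | (0x00u << 16) | 0x0004u, composed ahead of
    // time so no shifting happens at runtime. Per the commit note, the version
    // field is read big-endian (most-significant byte first), which is why an
    // explanatory comment was added alongside the reads.
    public const uint Tds74 = 0x74000004;
}
```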
This pull request has been marked as stale due to inactivity for more than 30 days. If you would like to keep this pull request open, please provide an update or respond to any comments. Otherwise, it will be closed automatically in 7 days.

This PR is not stale.

@edwardneal - Can you resolve the conflicts? I'll kick a new PR run once that's done.

/azp run

Azure Pipelines successfully started running 2 pipeline(s).

@edwardneal - The PR pipeline issues on main have been fixed. Can you re-merge this and your other Cleanup PRs? I'll get the pipelines started on all of them. Sorry!

Thanks @paulmedynski. I've just merged main into this PR and into the other two cleanup PRs.

/azp run

Azure Pipelines successfully started running 2 pipeline(s).
I'm ok to proceed with this, but something about it rubs me the wrong way, and I think it's that there's no way to know when the class is fully populated. We have two separate "process" methods that need to be called before you can read from the class's properties. If you do read before it's fully populated you'll get different values out. That said, the same critique applies to what we have now. I think this is a step in the right direction, but want to keep iterating on how to have strong guardrails during token processing so that it's hard to make mistakes.
Will take another look when the enhanced routing support is incorporated!
```csharp
/// Indicates support for user-defined CLR types (up to a length of 8000
/// bytes.) This was introduced in SQL Server 2005.
/// </summary>
public bool UserDefinedTypes => true;
```
Note to self: expecting to see these get cleaned up
```csharp
JsonType = !featureData.IsEmpty && featureData[0] != 0x00
    && featureData[0] <= TdsEnums.MAX_SUPPORTED_JSON_VERSION;
break;
```
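For illustration, the check above boils down to the sketch below: the first byte of the JSON feature's ack data carries the negotiated version, where `0x00` (or empty data) means the feature isn't supported. The maximum-version value of `1` is an assumption; the real constant lives in `TdsEnums`.

```csharp
using System;

internal static class JsonFeatureAck
{
    // Assumed value for illustration; the real constant is
    // TdsEnums.MAX_SUPPORTED_JSON_VERSION.
    private const byte MaxSupportedJsonVersion = 1;

    // First byte of the JSON FEATUREEXTACK data is the negotiated version;
    // 0x00 (or empty data) means the feature isn't supported, and anything
    // above the driver's maximum is rejected.
    public static bool Parse(ReadOnlySpan<byte> featureData) =>
        !featureData.IsEmpty && featureData[0] != 0x00
        && featureData[0] <= MaxSupportedJsonVersion;
}
```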
missing a case for enhanced routing
```csharp
// @TODO: This feature is *far* too big, and has the same issues as the above OnEnvChange
// @TODO: Consider individual callbacks for the supported features and perhaps an interface
//        of feature callbacks. Or registering with the parser what features are handleable.
internal bool ShouldProcessFeatureExtAck(byte featureId) =>
    RoutingInfo is null || featureId is TdsEnums.FEATUREEXT_SQLDNSCACHING;
```
needs updating for enhanced routing
This check needs to include enhanced routing as an "or" condition. We get envchange tokens before feature ext tokens, so if we don't have a special case for it we won't process the enhanced routing ack.
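The suggested "or" condition might look like the sketch below. The `FEATUREEXT_ENHANCEDROUTING` name and value are assumptions (the DNS caching id `0x0B` matches the TDS feature-id convention), and stub types stand in for the driver's `TdsEnums` and connection class.

```csharp
// Sketch of the suggested fix: add enhanced routing as an "or" condition so
// its FEATUREEXTACK is still processed after routing info has arrived.
internal static class TdsEnumsStub
{
    public const byte FEATUREEXT_SQLDNSCACHING = 0x0B;
    public const byte FEATUREEXT_ENHANCEDROUTING = 0x0E; // assumed value
}

internal sealed class ConnectionStub
{
    public object? RoutingInfo;

    internal bool ShouldProcessFeatureExtAck(byte featureId) =>
        RoutingInfo is null
        || featureId is TdsEnumsStub.FEATUREEXT_SQLDNSCACHING
        || featureId is TdsEnumsStub.FEATUREEXT_ENHANCEDROUTING;
}
```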
This condition is already in the latest version of the file (L1265).
Force-pushed from d3feb15 to a67a8a8.
Thanks @mdaigle - I've moved the enhanced routing detection.

I think part of the issue here is that the LOGINACK and FEATUREEXTACK tokens are treated independently in TdsParser. The logic to read these tokens from the wire is blended with the parsing and processing, and that flexibility has meant that anything downstream has to assume it can receive those tokens in any order; it never knows precisely when the item has finished processing, so we can't strictly guarantee that anything will be read-only. If we could get to grips with the message sequencing, become more stringent about the order in which we expect to receive token streams from the server, and factor that parsing out of TdsParser, I'd quite like to have some form of "capability builder" class which accepts a single LOGINACK token and any number of FEATUREEXTACK tokens, then constructs a static ConnectionCapabilities instance for downstream call sites.
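A minimal sketch of the "capability builder" idea described above, assuming stub stand-ins for the driver's `SqlLoginAck` and `ConnectionCapabilities` types (all names here are hypothetical): the builder accepts one LOGINACK and any number of FEATUREEXTACK tokens, and nothing downstream can observe a half-populated object because the snapshot only exists once `Build()` succeeds.

```csharp
using System;
using System.Collections.Generic;

// Stubs standing in for the real driver types; names are assumptions.
internal sealed record SqlLoginAckStub(uint TdsVersion);

internal sealed class CapabilitiesSnapshot
{
    public uint TdsVersion { get; }
    public CapabilitiesSnapshot(SqlLoginAckStub loginAck) => TdsVersion = loginAck.TdsVersion;
    public void ProcessFeatureExtAck(byte featureId, byte[] data) { /* parse per feature id */ }
}

internal sealed class ConnectionCapabilitiesBuilder
{
    private SqlLoginAckStub? _loginAck;
    private readonly List<(byte Id, byte[] Data)> _featureAcks = new();

    public void SetLoginAck(SqlLoginAckStub ack) => _loginAck = ack;
    public void AddFeatureExtAck(byte id, byte[] data) => _featureAcks.Add((id, data));

    // The snapshot is only produced once all tokens have been supplied, so
    // callers can never read a partially-populated capabilities object.
    public CapabilitiesSnapshot Build()
    {
        if (_loginAck is null)
            throw new InvalidOperationException("LOGINACK not yet received.");
        var caps = new CapabilitiesSnapshot(_loginAck);
        foreach (var (id, data) in _featureAcks)
            caps.ProcessFeatureExtAck(id, data);
        return caps;
    }
}
```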
Description
This PR emerges in part from discussion with @paulmedynski on #3858. We noticed that the `json` data type wouldn't appear when running `SqlConnection.GetSchema("DataTypes")` against an Azure SQL instance. This is because of an underlying design choice in SqlMetaDataFactory: it only uses the SQL Server version number to control whether specific queries are used and specific records are returned. This isn't compatible with Azure SQL (which always returns a version of 12.x.) To fix this, we need SqlMetaDataFactory to be able to make decisions based upon the server capabilities, whether determined by the version, the presence of a FEATUREEXTACK token, or the contents of one of those tokens' values.

In this PR, we introduce a new type, `ConnectionCapabilities`. This class takes on the responsibility of parsing LOGINACK and FEATUREEXTACK tokens, then converting them into an object which SqlConnectionInternal, SqlMetaDataFactory and TdsParser can interrogate to check for feature availability. For good measure, I then plumb this object through to SqlMetaDataFactory. A subsequent PR can work out the best way to implement filtering.

LOGINACK parsing is handled by the new object's `ProcessLoginAck` method. This will look very different to the original `TryProcessLoginAck` method in TdsParser; the latter had a great deal of legacy baggage from needing to deal with SQL Server 2000 compatibility, which we no longer need to carry.

FEATUREEXTACK parsing is handled by `ProcessFeatureExtAck`. I've kept this fairly close to the original `OnFeatureExtAck` method in SqlConnectionInternal.

Other points of errata:

- Previously, we held the `SqlLoginAck` instance as a field on SqlConnectionInternal. I've turned it into a readonly ref struct, which means that it never leaves the scope it was created in.

This is a comparatively large PR, so I've tried to make it practical to review commit-by-commit.
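To illustrate the intended direction, here is a hypothetical call-site sketch of SqlMetaDataFactory filtering `GetSchema("DataTypes")` results on a capability flag rather than the server version. The `CapabilitiesView` type, its `JsonType` property name, and the filter helper are all assumptions for illustration, not the driver's actual API.

```csharp
using System.Collections.Generic;
using System.Linq;

// Stand-in for the capabilities object described above; the JsonType flag
// would be set while parsing the JSON FEATUREEXTACK.
internal sealed class CapabilitiesView
{
    public bool JsonType { get; init; }
}

internal static class DataTypeFilter
{
    // Keep a type name only if the server acknowledged the matching feature.
    public static IEnumerable<string> FilterTypeNames(
        IEnumerable<string> typeNames, CapabilitiesView caps) =>
        typeNames.Where(name => name != "json" || caps.JsonType);
}
```

Under this sketch, a server that never acknowledged the JSON feature would simply have `json` dropped from the data-types list, regardless of the version number it reports.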
Issues
Builds on a conversation in #3858.
Prerequisite to #3833.
Testing
I'm planning to make the coverage more explicit with unit tests for each relevant FEATUREEXTACK, but the existing test coverage should be fine.
We already have coverage (`ConnectionTests.ConnectionTestDeniedVersion` and `ConnectionTestPermittedVersion`) which specifically tests the LOGINACK component. Existing test functionality should prove the FEATUREEXTACK parsing: if there are issues parsing the relevant FEATUREEXTACK structures, the AlwaysEncrypted/vector/JSON/etc. tests will fail.