Run integration tests in-process #385
Conversation
integration/ct_integration_test.go
Outdated
if err != nil {
	t.Fatalf("Failed to read log config: %v", err)
}

if err := runParallelTests(cfgs, *httpServersFlag, *testDir, *mmdFlag); err != nil {
Recommend using t.Parallel() and the main test runner.
Not sure I follow how to do that -- `t.Parallel` appears to run different tests in parallel, whereas this is running multiple instances of the same test in parallel.
(BTW, the Travis builds include a variant with `-race` set, so we can hopefully catch concurrency issues.)
If you'd like to run the same test multiple times under different conditions, please use Go's "Subtests" feature. Each subtest can be run in parallel to speed things up using `t.Parallel()`.
https://golang.org/pkg/testing/#hdr-Subtests_and_Sub_benchmarks
Aha, that's the bit I was missing -- thanks for the pointer.
@@ -130,6 +130,11 @@ func (l LogOperationManager) getLogsAndExecutePass(ctx context.Context) bool {
	return false
}

// OperationSingle performs a single pass of the manager.
func (l LogOperationManager) OperationSingle() {
I like this :-)
func (l LogOperationManager) OperationSingle() {
	l.getLogsAndExecutePass(l.context.ctx)
}

// OperationLoop starts the manager working. It continues until told to exit.
// TODO(Martin2112): No mechanism for error reporting etc., this is OK for v1 but needs work
func (l LogOperationManager) OperationLoop() {
Have `OperationLoop` use `OperationSingle`.
`OperationLoop` needs the quit flag that `getLogsAndExecutePass` returns, but that's not relevant for `OperationSingle`, so I think it's clearer as-is.
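To illustrate the distinction being discussed, here is a standalone sketch using a stand-in type (the `manager` struct and its fields are hypothetical, not the actual `LogOperationManager`):

```go
package main

import (
	"fmt"
	"time"
)

// manager is a stand-in for LogOperationManager (hypothetical fields).
type manager struct {
	passes int
	max    int
}

// getLogsAndExecutePass returns true when the caller should quit,
// mirroring the quit flag mentioned above.
func (m *manager) getLogsAndExecutePass() bool {
	m.passes++
	return m.passes >= m.max
}

// OperationSingle performs one pass; it has no use for the quit flag.
func (m *manager) OperationSingle() {
	m.getLogsAndExecutePass()
}

// OperationLoop repeats passes until told to quit.
func (m *manager) OperationLoop() {
	for !m.getLogsAndExecutePass() {
		time.Sleep(time.Millisecond)
	}
}

func main() {
	m := &manager{max: 3}
	m.OperationLoop()
	fmt.Println("passes:", m.passes) // → passes: 3
}
```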
testonly/integration/clientserver.go
Outdated
@@ -74,14 +83,13 @@ func listen() (string, net.Listener, error) {

// getTestDB drops and recreates the test database.
// Returns a database connection to the test database.
func getTestDB(testID string) (*sql.DB, error) {
func getTestDB(testID string) (*sql.DB, string, error) {
We were trying to isolate the DB connection string away from other functions and just return a proper `sql.DB`. Is there a reason why we can't maintain this abstraction?
Good point, will do.
testonly/integration/clientserver.go
Outdated
if err != nil {
	return nil, err
}

return &LogEnv{
Prefer creating the returned structure at the end of the function rather than setting individual fields throughout the function; it helps with code clarity.
`sync.WaitGroup` cannot be copied, so creating the structure at the end doesn't work for that, and it seemed better to be consistent.
Scratch that, I can just store a pointer to the one true `sync.WaitGroup`.
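A sketch of that pointer approach, which keeps both properties: the struct is assembled in one place at the end of the constructor, and the one true `sync.WaitGroup` is shared rather than copied (the field names here are hypothetical):

```go
package main

import (
	"fmt"
	"sync"
)

// LogEnv sketch: storing a *sync.WaitGroup (not a sync.WaitGroup value)
// means the struct literal below shares, rather than copies, the
// WaitGroup that the goroutine is already using.
type LogEnv struct {
	wg *sync.WaitGroup
}

func newLogEnv() *LogEnv {
	wg := &sync.WaitGroup{}
	wg.Add(1)
	go func() {
		defer wg.Done()
		// ... background work (server, sequencer, etc.) ...
	}()
	// Build the returned structure in one place, at the end.
	return &LogEnv{wg: wg}
}

// Close blocks until all tracked goroutines have finished.
func (e *LogEnv) Close() {
	e.wg.Wait()
}

func main() {
	env := newLogEnv()
	env.Close()
	fmt.Println("shut down cleanly")
}
```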
testonly/integration/clientserver.go
Outdated
	sequencerWindow)
sequencerTask := server.NewLogOperationManagerForTest(ctx, registry,
	batchSize, sleep, timeSource, sequencerManager)
if numSequencers == 0 {
If you'd like to test multiple sequencers, convert env.Sequencer into a slice.
That's not how it works -- the existing code is somewhat confusing because it gives the name `sequencerTask` to a `LogOperationManager` object (of which there is only one). The `LogOperationManager` will then spawn goroutines for the given count of sequencers.
testonly/integration/clientserver.go
Outdated
env.Sequencer = server.NewLogOperationManagerForTest(ctx, registry,
	batchSize, sleepBetweenRuns, timeSource, env.LogOperation)
} else {
	// Start a live sequencer in a goroutine.
We don't want to launch any goroutines here because that introduces concurrency into tests.
Separate out the launching of sequencers into a separate function that individual tests can call.
Heh, the boat has sailed on avoiding concurrency. The existing code already launches one goroutine at the top level (`go grpcServer.Serve(lis)`), and even with the test one-shot `LogOperationManager` there will be a bunch of goroutines spawned under the covers (see `SequencerManager.ExecutePass`).
testonly/integration/clientserver.go
Outdated
}

// LaunchCT adds a running CT personality to the LogEnv test environment.
func (env *LogEnv) LaunchCT(cfgs []ct.LogConfig) error {
Please move CT integration tests to a separate file, perhaps to the example CT directory, to enforce separation of personality-specific test functions.
I'll move it into a separate file, but given that the directory is `testonly/integration`, I think the code belongs in this package for the time being.
Added a new `CTLogEnv` on top of `LogEnv`, in `testonly/integration/ct.go`.
testonly/integration/clientserver.go
Outdated
	}
}
// Wait long enough for a signer run to happen, so empty tree gets signed.
time.Sleep(2 * sleepBetweenRuns)
Use an `os.Signal` rather than a sleep to avoid concurrency problems.
Actually, I think I can just delete this; it was left over from before I created `OperationSingle`.
Also, does `os.Signal` do anything clever with threads/goroutines? Because UNIX signals and threads traditionally don't mix well...
PTAL
integration/ct_integration_test.go
Outdated
mmd := 120 * time.Second
// Run a container for the parallel sub-tests, so that we wait until they
// all complete before terminating the test environment.
t.Run("container", func(t *testing.T) {
Consider using testing.Main to establish common setup and teardown code for each test.
Ah, but there isn't common setup/teardown code -- it's different for different tests.
Thanks for these changes.
Consider also moving the semantics around the graceful shutdown of a LogOperationManager inside LogOperationManager.
integration/ct_integration_local.cfg
Outdated
@@ -0,0 +1,26 @@
[
This log config file appears to only be used in the ct_integration test.
Consider moving this to be a `const` inside the ct_integration_test.
The reason is that paths in Go are considered to be relative to the main or test package that is running.
Done, but I don't understand the reason you mention (the expanded `[]ct.LogConfig` slice will also contain a bunch of path names, relative to the test).
Putting the paths inside the test file makes it clear that this config data is test only.
Just a bit of self-documentation, a speedup (it avoids a file `Open`), and compile-time checking of the data structure.
wg.Add(1)
go func(c ctfe.LogConfig) {
	defer wg.Done()
	for _, cfg := range cfgs {
If cfgs weren't loaded from a file, this could be a proper table-driven test:
for _, test := range []struct {
	// test vars
}{
	{ /* test case */ },
} {
	// test code
}
Done.
@@ -76,7 +77,6 @@ func listen() (string, net.Listener, error) {
// Returns a database connection to the test database.
func getTestDB(testID string) (*sql.DB, error) {
	var testDBURI = fmt.Sprintf("root@tcp(127.0.0.1:3306)/log_unittest_%v", testID)
builtin.MySQLURIFlag = &testDBURI
Thanks for fixing this hack
	return nil, err
}
go grpcServer.Serve(lis)
wg.Add(1)
`GracefulStop` waits for the server to complete active RPCs. This should eliminate the need to add this to the wait group, which I would love to avoid creating here.
`log.Close()` already calls `grpcServer.GracefulStop()`, so I'm not sure what you're after here.
I think a wait group is absolutely essential when we're starting separate goroutines -- for example, to ensure that any shutdown failure shows up as a hang/test timeout rather than an obscure database locking issue.
If I'm understanding you properly, you're saying that there should always be some kind of thread-join logic for each goroutine, all the way up to the main thread -- agreed.
I'm just saying that this wait group / thread-join logic should be contained inside `server.GracefulStop` so we don't need to recreate it here.
sequencerTask = server.NewLogOperationManager(ctx2, registry,
	batchSize, numSequencers, sleepBetweenRuns, timesource, sequencerManager)
wg.Add(1)
go func(wg *sync.WaitGroup, om *server.LogOperationManager) {
Rather than adding the sequencer to a wait group, is there a way we could add a `GracefulStop` to the log operation manager as well?
It seems to me that the management of which sequencer runs are active, and how to do a proper shutdown, should be internal to the log operation manager.
Probably a good idea, but rather beyond the scope of this PR.
Added #391
Thanks. If you don't mind, could you create and merge #391 first?
That would be a great help.
testonly/integration/clientserver.go
Outdated
sequencerManager := server.NewSequencerManager(keyManager, registry, sequencerWindow)
var wg sync.WaitGroup
var sequencerTask *server.LogOperationManager
var cancel context.CancelFunc
Consider moving the cancel semantics and wait group inside the LogOperationManager
See comment below.
}, nil
}

// Close shuts down the server.
func (env *LogEnv) Close() {
	if env.sequencerCancel != nil {
Consider replacing this with `env.Sequencer.GracefulStop()`, which should block until the sequencer has been stopped.
As above.
This is a factory function to build an Extension that uses the provided database and KeyManager, rather than relying on global command-line flags.
Track running goroutines with a sync.WaitGroup to ensure proper completion. Add LogOperationManager.OperationSingle() for running a single pass, rather than relying on the 'test' instance just running once.
Please address #391 before merging this PR.
Force-pushed from 1e3fa29 to 20a94bc.
Rebased to fix merge conflict (now injecting a
The whole point of raising a separate issue (#391) was that changing
I think #391 should be addressed separately as it will be easier to review.
integration/log_integration_test.go
Outdated
func TestInProcessLogIntegration(t *testing.T) {
	ctx := context.Background()
	env, err := integration.NewLogEnv(ctx, 2, "TestInProcessLogIntegration")
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Pull out magic number '2' as a const? Also possibly 6962.
Builds on LogEnv by also configuring a collection of CT logs and running a CT log personality in a goroutine.
Force-pushed from 20a94bc to 63bcf64.
…m 589b12611..55c1d0c85
8cc3a55af Add custom options to allow more control of swagger/openapi output (google#145)
b0be3cdef runtime: fix chunk encoding
82b83c781 protoc-gen-swagger optional SourceCodeInfo
1fd8ba6a5 Fix logic handling primitive wrapper in URL params
b2423da79 runtime: use r.Context() (google#473)
c323909dd Add Handler method to pass in a client (google#454)
ac41185c3 Fallback to JSON name when matching URL parameter. (google#450)
8bec008bd fix 2 typos in Registry.SetPrefix's comment
de5a00fcc Reference Gulp by a more complete path
185dda2d4 Fix build.
824b9a716 Test with Go 1.9.
f2862b476 Memoise calls to fullyQualifiedNameToSwaggerName to speed it up for large registries (google#421)
1a03ca3ba Update DO NOT EDIT template. (google#434)
a5c7982c0 Update Swagger Codegen from 2.1.6 to 2.2.2 (google#415)
d64f5319e ISSUE#405: customize the error return (google#409)
c6f7a5ac6 improve {incoming,outgoing}HeaderMatcher logic (google#408)
2a40dd795 Return if runtime.AnnotateContext gave error (google#403)
47a11d786 jsonpb: update tests to reflect new jsonpb behavior (google#401)
f6f92fcd9 Reference import grpc Status to suppress unused errors. (google#387)
979be44d9 fixes package name override doesn't work (google#277)
b1e4aed16 Skip unreferenced messages in definitions. (google#371)
ca4c8d6af ci: regen with current protoc-gen-go (google#385)
7195ea445 Use status package for error and introduce WithProtoErrorHandler option (google#378)
4539fc575 Add response headers from grpc server (google#374)
597c8c358 support allow_delete_body for protoc-gen-grpc-gateway (google#318)
55d0969c0 Use canonical header form in default header matcher. (google#369)
893772d22 Extend ServeMux to allow user configurable header forwarding.
git-subtree-dir: vendor/github.com/grpc-ecosystem/grpc-gateway
git-subtree-split: 55c1d0c857e5c6cadb0ee292f6cc36621cd5ea8c
Should be rather faster (e.g. 1.5s for CT test, 9s for log test)