
Few updates #6

Merged
merged 14 commits into from
Feb 28, 2019

Conversation


@ahmelsayed ahmelsayed commented Feb 20, 2019

apiVersion: kore.k8s.io/v1alpha1
kind: ScaledObject
metadata:
  name: _________
spec:
  scaleTargetRef:
    deploymentName: _______
  triggers:
  - type: azure-queue
    metadata:
      _______: _______

This removes secretRef from the ScaledObject; the ScaleHandler just copies the whole env section over to the scalers.
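A minimal sketch of what "copy the env section to the scalers" could look like (hypothetical names, not the actual kore code): the handler flattens the target deployment's environment variables into a map and hands that map to each trigger's scaler, instead of making the scaler chase a secretRef.

```go
package main

import "fmt"

// EnvVar and TriggerMetadata are simplified stand-ins for the real
// kore types in pkg/apis and pkg/scalers.
type EnvVar struct {
	Name, Value string
}

type TriggerMetadata map[string]string

// resolveEnv flattens a container's env section into a lookup map
// that scalers can use to resolve connection strings, queue names, etc.
func resolveEnv(env []EnvVar) map[string]string {
	resolved := make(map[string]string, len(env))
	for _, e := range env {
		resolved[e.Name] = e.Value
	}
	return resolved
}

func main() {
	env := []EnvVar{
		{Name: "AzureWebJobsStorage", Value: "DefaultEndpointsProtocol=..."},
		{Name: "QUEUE_NAME", Value: "myqueue"},
	}
	resolved := resolveEnv(env)
	fmt.Println(resolved["QUEUE_NAME"]) // → myqueue
}
```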

  • Update Gopkg.toml to lock the Kubernetes dependencies to 1.13.3
  • Update vendor to include k8s.io/code-generator and k8s.io/gengo
  • Added instructions for running kore locally and then publishing a functions project to your cluster. It should scale if there is an Azure queue trigger in the functions project.

CI builds each branch and then pushes the results to

projectkore.azurecr.io/project-kore/kore:{branch-name}

so for example this PR will create projectkore.azurecr.io/project-kore/kore:ahmels-work

There are no tests yet, so the CI isn't running any. It just builds the project, builds a Docker image for it, and pushes the image to the ACR registry projectkore.azurecr.io

// workaround go dep management system
_ "k8s.io/code-generator/pkg/util"
_ "k8s.io/gengo/parser"
_ "golang.org/x/tools/imports"
Contributor Author

These repos are needed in the vendor folder for hack/generate-groups.sh to auto-generate the client SDK for our custom ScaledObject types. Blank-importing them is the workaround people use in Go to stop dep from pruning otherwise-unreferenced dependencies from the vendor folder. I'm not sure if there is a better way.

@@ -16,7 +22,8 @@ func main() {
}

ctx := signals.Context()
controller.NewController(koreClient, kubeClient).Run(ctx)
scaleHandler := scalers.NewScaleHandler(koreClient, kubeClient)
controller.NewController(koreClient, kubeClient, scaleHandler).Run(ctx)

Contributor Author

The ScaleHandler is the interface between the Controller and the Adapter on one end, and the individual scalers on the other.

Currently the interface for it is just

HandleScale(scaledObject *kore_v1alpha1.ScaledObject)
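In Go, that one-method surface could be sketched like this (hypothetical, trimmed-down types; the real signature takes the generated *kore_v1alpha1.ScaledObject):

```go
package main

import "fmt"

// ScaledObject stands in for the generated kore_v1alpha1.ScaledObject type.
type ScaledObject struct {
	Name           string
	DeploymentName string
}

// ScaleHandler is the seam between the Controller/Adapter and the
// individual scalers: a single entry point per ScaledObject.
type ScaleHandler interface {
	HandleScale(scaledObject *ScaledObject)
}

// recordingHandler is a stub implementation used only to show the shape.
type recordingHandler struct {
	last string
}

func (h *recordingHandler) HandleScale(so *ScaledObject) {
	h.last = so.Name
}

func main() {
	var h ScaleHandler = &recordingHandler{}
	h.HandleScale(&ScaledObject{Name: "queue-scaler", DeploymentName: "my-func"})
	fmt.Println(h.(*recordingHandler).last) // → queue-scaler
}
```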

Contributor

This is fine

// used for polling external systems like we have with autoscalers.
// This however makes it not possible to have a custom check interval
// per ScaledObject or deployment.
time.Second*30,
Contributor Author

I mentioned this in one of the emails. Looking at the online discussion linked above, it seems resyncPeriod exists so the Informer can replay its entire cache to the UpdateFunc handler. That simplifies things: we just rely on the Informer cache to replay every ScaledObject it has every 30 seconds, and check the respective scalers for events.

The alternative is to build our own queue/channel, keep a copy of the Informer cache state ourselves in the ScaleHandler, and maintain our own timer for enqueuing the ScaledObject into the work queue. We would also have to handle DeleteFunc properly with some shared collection that we take an RWLock on, I think. Right?

Contributor

I don't see a reason to use resync. Our requirement is to get notified on every CRD add/update event. I don't see why we would need to replay the cache?

Contributor Author

This will just use the cache/resyncPeriod instead of implementing our own store for these ScaledObjects and our own timer for checking them.

Basically we will need to store these ScaledObjects in a shared collection and do proper RWLock for adding, updating, or removing ScaledObjects, then have our own timer to check their trigger status for scale up/down.

Configuring the resyncPeriod shifts that responsibility to the Informer, since it keeps pinging our code (every 30 seconds in this example) instead of us maintaining our own timer. Would you rather we keep our own copy and timer for checking these triggers?

Contributor

I'd rather we keep our own copy and check these triggers ourselves. We need to be able to change the interval configurably to accommodate various latency SLAs.

}

// always call syncScaledObject even on resync
// this uses the informer cache for updates rather than maintaining another cache
c.syncScaledObject(newObj)
Contributor Author

Make sure to always call syncScaledObject on every UpdateFunc, regardless of the ResourceVersion, per the comment above.

Contributor

We should invoke syncScaledObject only if the spec has changed - do you see any other reason?
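One way to express that guard, with hypothetical simplified types (a real implementation would compare the generated ScaledObjectSpec structs, e.g. via reflect.DeepEqual, rather than ResourceVersion, which bumps on status-only writes too):

```go
package main

import (
	"fmt"
	"reflect"
)

// Spec stands in for the generated ScaledObjectSpec.
type Spec struct {
	DeploymentName string
	Triggers       []string
}

type ScaledObject struct {
	ResourceVersion string
	Spec            Spec
}

// specChanged is the guard UpdateFunc could use: resync replays
// identical objects, so only re-sync when the spec actually differs.
func specChanged(oldObj, newObj *ScaledObject) bool {
	return !reflect.DeepEqual(oldObj.Spec, newObj.Spec)
}

func main() {
	a := &ScaledObject{ResourceVersion: "1",
		Spec: Spec{DeploymentName: "d", Triggers: []string{"azure-queue"}}}
	b := &ScaledObject{ResourceVersion: "2", Spec: a.Spec}
	// A resync replay: new ResourceVersion, identical spec, so skip.
	fmt.Println(specChanged(a, b)) // → false
}
```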


log.Infof("Starting autoscalers for: %s. target deployment: %s", scaledObject.GetName(), deployment.GetName())
go c.scaleHandler.HandleScale(scaledObject)
Contributor Author

There is no need to block the controller on the ScaleHandler logic. This can happen asynchronously for every ScaledObject.

Contributor

I think we should care about the order of operations, no? That's why the lock is in place; the goroutine defeats its purpose.

// scaledObjectCopy.Status.LastScaleTime = &currentTime
// scaledObjectCopy.Status.LastActiveTime = &currentTime
// scaledObjectCopy.Status.CurrentReplicas = *deploymentCopy.Spec.Replicas
// scaledObjectCopy.Status.DesiredReplicas = scaleDecision
Contributor Author

For whatever reason, I couldn't get the Status of a ScaledObject to update. Every time I call Update, the status is still empty on the next read. Not sure why yet.

Contributor

Better to replace the if-else in getScaler with a switch sooner rather than later :)

Contributor Author

will do

Contributor

About the status object not updating - can you try updating the state of the original object prior to the copy and see if it persists?

@ahmelsayed ahmelsayed merged commit fc7c8ef into master Feb 28, 2019
dennis-brinley added a commit to dennis-brinley/keda that referenced this pull request Jul 13, 2021
scale-handler.go:
- Changed scaler ID to "solace-event-queue"
solace_scaler.go:
- changed metadata identifier prefixes from "msg" to "message" where applicable
- added multiplier to messageSpoolUsageTarget (1024^2) so metadata values are expressed in MBytes
- updated some var names (camel case not followed after rename)
solace_scaler_test.go:
- Mods to reflect change in scaler code
solace-helpers.ts:
- Mod to test procedure to allow message size to be specified for test publisher (test message spool usage)
- Mod to trigger/scaler name in scaledobject config
solace.tests.ts:
- added tests to scale up/down based on message spool usage (kedacore#5 and kedacore#6)


Signed-off-by: Dennis Brinley <dennis.brinley@solace.com>
joelsmith pushed a commit to joelsmith/keda that referenced this pull request May 11, 2022