
Commit 8851c82

Ensure litgo is gofmt-friendly
gofmt indents already-indented comments with an extra tab, so we need to strip that when outputting to markdown. Otherwise, those text chunks become markdown indented code blocks. This also runs gofmt across the sample code base.
1 parent b1893ed commit 8851c82
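
The commit message above describes the fix in prose. As a rough sketch (not litgo's actual code; the function name and details here are assumptions), stripping the extra tab that gofmt adds before emitting a comment chunk as markdown could look like this:

```go
package main

import (
	"fmt"
	"strings"
)

// stripGofmtIndent drops one leading tab from each line of a doc-comment
// chunk. gofmt indents the body of an already-indented /* ... */ comment by
// an extra tab, and tab-indented text renders as an indented code block in
// markdown, so the tab has to be removed before the text is emitted.
// (Hypothetical helper; litgo's real implementation may differ.)
func stripGofmtIndent(chunk string) string {
	lines := strings.Split(chunk, "\n")
	for i, line := range lines {
		lines[i] = strings.TrimPrefix(line, "\t")
	}
	return strings.Join(lines, "\n")
}

func main() {
	fmt.Print(stripGofmtIndent("\t### 1: Load the CronJob by name\n\n\tWe'll fetch the CronJob using our client.\n"))
}
```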

5 files changed: +160 −146 lines

docs/book/src/cronjob-tutorial/testdata/emptycontroller.go

-1
@@ -91,7 +91,6 @@ when the manager is started.
 For now, we just note that this reconciler operates on `CronJob`s. Later,
 we'll use this to mark that we care about related objects as well.
 
-TODO: jump back to main?
 */
 
 func (r *CronJobReconciler) SetupWithManager(mgr ctrl.Manager) error {

docs/book/src/cronjob-tutorial/testdata/project/controllers/cronjob_controller.go

+71 −71
@@ -108,16 +108,16 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	log := r.Log.WithValues("cronjob", req.NamespacedName)
 
 	/*
-	### 1: Load the CronJob by name
+		### 1: Load the CronJob by name
 
-	We'll fetch the CronJob using our client. All client methods take a
-	context (to allow for cancellation) as their first argument, and the object
-	in question as their last. Get is a bit special, in that it takes a
-	[`NamespacedName`](https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client#ObjectKey)
-	as the middle argument (most don't have a middle argument, as we'll see
-	below).
+		We'll fetch the CronJob using our client. All client methods take a
+		context (to allow for cancellation) as their first argument, and the object
+		in question as their last. Get is a bit special, in that it takes a
+		[`NamespacedName`](https://godoc.org/sigs.k8s.io/controller-runtime/pkg/client#ObjectKey)
+		as the middle argument (most don't have a middle argument, as we'll see
+		below).
 
-	Many client methods also take variadic options at the end.
+		Many client methods also take variadic options at the end.
 	*/
 	var cronJob batch.CronJob
 	if err := r.Get(ctx, req.NamespacedName, &cronJob); err != nil {
@@ -129,11 +129,11 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	}
 
 	/*
-	### 2: List all active jobs, and update the status
+		### 2: List all active jobs, and update the status
 
-	To fully update our status, we'll need to list all child jobs in this namespace that belong to this CronJob.
-	Similarly to Get, we can use the List method to list the child jobs. Notice that we use variadic options to
-	set the namespace and field match (which is actually an index lookup that we set up below).
+		To fully update our status, we'll need to list all child jobs in this namespace that belong to this CronJob.
+		Similarly to Get, we can use the List method to list the child jobs. Notice that we use variadic options to
+		set the namespace and field match (which is actually an index lookup that we set up below).
 	*/
 	var childJobs kbatch.JobList
 	if err := r.List(ctx, &childJobs, client.InNamespace(req.Namespace), client.MatchingField(jobOwnerKey, req.Name)); err != nil {
@@ -142,15 +142,15 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	}
 
 	/*
-	Once we have all the jobs we own, we'll split them into active, successful,
-	and failed jobs, keeping track of the most recent run so that we can record it
-	in status. Remember, status should be able to be reconstituted from the state
-	of the world, so it's generally not a good idea to read from the status of the
-	root object. Instead, you should reconstruct it every run. That's what we'll
-	do here.
-
-	We can check if a job is "finished" and whether it succeeded or failed using status
-	conditions. We'll put that logic in a helper to make our code cleaner.
+		Once we have all the jobs we own, we'll split them into active, successful,
+		and failed jobs, keeping track of the most recent run so that we can record it
+		in status. Remember, status should be able to be reconstituted from the state
+		of the world, so it's generally not a good idea to read from the status of the
+		root object. Instead, you should reconstruct it every run. That's what we'll
+		do here.
+
+		We can check if a job is "finished" and whether it succeeded or failed using status
+		conditions. We'll put that logic in a helper to make our code cleaner.
 	*/
 
 	// find the active list of jobs
@@ -160,9 +160,9 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	var mostRecentTime *time.Time // find the last run so we can update the status
 
 	/*
-	We consider a job "finished" if it has a "succeeded" or "failed" condition marked as true.
-	Status conditions allow us to add extensible status information to our objects that other
-	humans and controllers can examine to check things like completion and health.
+		We consider a job "finished" if it has a "succeeded" or "failed" condition marked as true.
+		Status conditions allow us to add extensible status information to our objects that other
+		humans and controllers can examine to check things like completion and health.
 	*/
 	isJobFinished := func(job *kbatch.Job) (bool, kbatch.JobConditionType) {
 		for _, c := range job.Status.Conditions {
@@ -176,8 +176,8 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	// +kubebuilder:docs-gen:collapse=isJobFinished
 
 	/*
-	We'll use a helper to extract the scheduled time from the annotation that
-	we added during job creation.
+		We'll use a helper to extract the scheduled time from the annotation that
+		we added during job creation.
 	*/
 	getScheduledTimeForJob := func(job *kbatch.Job) (*time.Time, error) {
 		timeRaw := job.Annotations[scheduledTimeAnnotation]
@@ -236,35 +236,35 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	}
 
 	/*
-	Here, we'll log how many jobs we observed at a slightly higher logging level,
-	for debugging. Notice how instead of using a format string, we use a fixed message,
-	and attach key-value pairs with the extra information. This makes it easier to
-	filter and query log lines.
+		Here, we'll log how many jobs we observed at a slightly higher logging level,
+		for debugging. Notice how instead of using a format string, we use a fixed message,
+		and attach key-value pairs with the extra information. This makes it easier to
+		filter and query log lines.
 	*/
 	log.V(1).Info("job count", "active jobs", len(activeJobs), "successful jobs", len(successfulJobs), "failed jobs", len(failedJobs))
 
 	/*
-	Using the date we've gathered, we'll update the status of our CRD.
-	Just like before, we use our client. To specifically update the status
-	subresource, we'll use the `Status` part of the client, with the `Update`
-	method.
+		Using the date we've gathered, we'll update the status of our CRD.
+		Just like before, we use our client. To specifically update the status
+		subresource, we'll use the `Status` part of the client, with the `Update`
+		method.
 
-	The status subresource ignores changes to spec, so it's less likely to conflict
-	with any other updates, and can have separate permissions.
+		The status subresource ignores changes to spec, so it's less likely to conflict
+		with any other updates, and can have separate permissions.
 	*/
 	if err := r.Status().Update(ctx, &cronJob); err != nil {
 		log.Error(err, "unable to update CronJob status")
 		return ctrl.Result{}, err
 	}
 
 	/*
-	Once we've updated our status, we can move on to ensuring that the status of
-	the world matches what we want in our spec.
+		Once we've updated our status, we can move on to ensuring that the status of
+		the world matches what we want in our spec.
 
-	### 3: Clean up old jobs according to the history limit
+		### 3: Clean up old jobs according to the history limit
 
-	First, we'll try to clean up old jobs, so that we don't leave too many lying
-	around.
+		First, we'll try to clean up old jobs, so that we don't leave too many lying
+		around.
 	*/
 
 	// NB: deleting these is "best effort" -- if we fail on a particular one,
@@ -316,22 +316,22 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	}
 
 	/*
-	### 5: Get the next scheduled run
+		### 5: Get the next scheduled run
 
-	If we're not paused, we'll need to calculate the next scheduled run, and whether
-	or not we've got a run that we haven't processed yet.
+		If we're not paused, we'll need to calculate the next scheduled run, and whether
+		or not we've got a run that we haven't processed yet.
 	*/
 
 	/*
-	We'll calculate the next scheduled time using our helpful cron library.
-	We'll start calculating appropriate times from our last run, or the creation
-	of the CronJob if we can't find a last run.
+		We'll calculate the next scheduled time using our helpful cron library.
+		We'll start calculating appropriate times from our last run, or the creation
+		of the CronJob if we can't find a last run.
 
-	If there are too many missed runs and we don't have any deadlines set, we'll
-	bail so that we don't cause issues on controller restarts or wedges.
+		If there are too many missed runs and we don't have any deadlines set, we'll
+		bail so that we don't cause issues on controller restarts or wedges.
 
-	Otherwise, we'll just return the missed runs (of which we'll just use the latest),
-	and the next run, so that we can know when it's time to reconcile again.
+		Otherwise, we'll just return the missed runs (of which we'll just use the latest),
+		and the next run, so that we can know when it's time to reconcile again.
 	*/
 	getNextSchedule := func(cronJob *batch.CronJob, now time.Time) (lastMissed *time.Time, next time.Time, err error) {
 		sched, err := cron.ParseStandard(cronJob.Spec.Schedule)
@@ -398,16 +398,16 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	}
 
 	/*
-	We'll prep our eventual request to requeue until the next job, and then figure
-	out if we actually need to run.
+		We'll prep our eventual request to requeue until the next job, and then figure
+		out if we actually need to run.
 	*/
 	scheduledResult := ctrl.Result{RequeueAfter: nextRun.Sub(r.Now())} // save this so we can re-use it elsewhere
 	log = log.WithValues("now", r.Now(), "next run", nextRun)
 
 	/*
-	### 6: Run a new job if it's on schedule, not past the deadline, and not blocked by our concurrency policy
+		### 6: Run a new job if it's on schedule, not past the deadline, and not blocked by our concurrency policy
 
-	If we've missed a run, and we're still within the deadline to start it, we'll need to run a job.
+		If we've missed a run, and we're still within the deadline to start it, we'll need to run a job.
 	*/
 	if missedRun == nil {
 		log.V(1).Info("no upcoming scheduled times, sleeping until next")
@@ -427,9 +427,9 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	}
 
 	/*
-	If we actually have to run a job, we'll need to either wait till existing ones finish,
-	replace the existing ones, or just add new ones. If our information is out of date due
-	to cache delay, we'll get a requeue when we get up-to-date information.
+		If we actually have to run a job, we'll need to either wait till existing ones finish,
+		replace the existing ones, or just add new ones. If our information is out of date due
+		to cache delay, we'll get a requeue when we get up-to-date information.
 	*/
 	// figure out how to run this job -- concurrency policy might forbid us from running
 	// multiple at the same time...
@@ -450,19 +450,19 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	}
 
 	/*
-	Once we've figured out what to do with existing jobs, we'll actually create our desired job
+		Once we've figured out what to do with existing jobs, we'll actually create our desired job
 	*/
 
 	/*
-	We need to construct a job based on our CronJob's template. We'll copy over the spec
-	from the template and copy some basic object meta.
+		We need to construct a job based on our CronJob's template. We'll copy over the spec
+		from the template and copy some basic object meta.
 
-	Then, we'll set the "scheduled time" annotation so that we can reconstitute our
-	`LastScheduleTime` field each reconcile.
+		Then, we'll set the "scheduled time" annotation so that we can reconstitute our
+		`LastScheduleTime` field each reconcile.
 
-	Finally, we'll need to set an owner reference. This allows the Kubernetes garbage collector
-	to clean up jobs when we delete the CronJob, and allows controller-runtime to figure out
-	which cronjob needs to be reconciled when a given job changes (is added, deleted, completes, etc).
+		Finally, we'll need to set an owner reference. This allows the Kubernetes garbage collector
+		to clean up jobs when we delete the CronJob, and allows controller-runtime to figure out
+		which cronjob needs to be reconciled when a given job changes (is added, deleted, completes, etc).
 	*/
 	constructJobForCronJob := func(cronJob *batch.CronJob, scheduledTime time.Time) (*kbatch.Job, error) {
 		// We want job names for a given nominal start time to have a deterministic name to avoid the same job being created twice
@@ -509,12 +509,12 @@ func (r *CronJobReconciler) Reconcile(req ctrl.Request) (ctrl.Result, error) {
 	log.V(1).Info("created Job for CronJob run", "job", job)
 
 	/*
-	### 7: Requeue when we either see a running job or it's time for the next scheduled run
+		### 7: Requeue when we either see a running job or it's time for the next scheduled run
 
-	Finally, we'll return the result that we prepped above, that says we want to requeue
-	when our next run would need to occur. This is taken as a maximum deadline -- if something
-	else changes in between, like our job starts or finishes, we get modified, etc, we might
-	reconcile again sooner.
+		Finally, we'll return the result that we prepped above, that says we want to requeue
+		when our next run would need to occur. This is taken as a maximum deadline -- if something
+		else changes in between, like our job starts or finishes, we get modified, etc, we might
+		reconcile again sooner.
 	*/
 	// we'll requeue once we see the running job, and update our status
 	return scheduledResult, nil

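The cronjob_controller.go hunks above are whitespace-only: per the commit message, running gofmt over the sample adds one tab to the body of each already-indented block comment. A minimal illustration of that behavior (an assumed example, not copied from the tutorial file):

```go
package sample

func reconcileSketch() {
	// Hand-written, the comment body below would sit at the same indentation
	// as the opening /*; after gofmt it carries one extra tab, which litgo
	// now strips so the rendered markdown is not an indented code block.
	/*
		### 1: Load the CronJob by name
	*/
}
```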
docs/book/src/multiversion-tutorial/deployment.md

+3 −2
@@ -84,7 +84,8 @@ respectively. Notice that each has a different API version.
 Finally, if we wait a bit, we should notice that our CronJob continues to
 reconcile, even though our controller is written against our v1 API version.
 
-## Troubleshooting
-TODO(../TODO.md) steps for troubleshoting
+## Troubleshooting
+
+[steps for troubleshooting](/TODO.md)
 
 [ref-multiver]: /reference/generating-crd.md#multiple-versions "Generating CRDs: Multiple Versions"
