
Memory Leak: AggregatingVitalMonitor listeners are never unregistered #2008

Closed
joshskeen opened this issue Apr 24, 2024 · 3 comments · Fixed by #2009
Assignees
xgouchet

Labels
bug Something isn't working

Comments

@joshskeen

Describe the bug

In RumViewScope.init,

cpuVitalMonitor.register(cpuVitalListener)
memoryVitalMonitor.register(memoryVitalListener)
frameRateVitalMonitor.register(frameRateVitalListener)

are called, but the corresponding AggregatingVitalMonitor.unregister calls never happen.
The session-scoped AggregatingVitalMonitors each hold a map of listeners:

private val listeners: MutableMap<VitalListener, VitalInfo> = mutableMapOf()

This map keeps growing because listeners are never unregistered when a view is stopped, causing a memory leak.
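
For illustration, here is a minimal, self-contained sketch of the pattern. The names (SketchVitalListener, SketchVitalInfo, SketchAggregatingVitalMonitor) are made-up stand-ins for the SDK's VitalListener, VitalInfo and AggregatingVitalMonitor, not the actual SDK code; the point is only that a session-scoped map gains one entry per started view and nothing ever removes them.

// Illustrative sketch only, not Datadog SDK code.
interface SketchVitalListener {
    fun onVitalUpdate(info: SketchVitalInfo)
}

data class SketchVitalInfo(val sampleCount: Int = 0)

class SketchAggregatingVitalMonitor {
    // Session-scoped map: one entry per registered listener.
    private val listeners: MutableMap<SketchVitalListener, SketchVitalInfo> = mutableMapOf()

    fun register(listener: SketchVitalListener) {
        listeners[listener] = SketchVitalInfo()
    }

    fun unregister(listener: SketchVitalListener) {
        listeners.remove(listener)
    }

    fun listenerCount(): Int = listeners.size
}

fun main() {
    val monitor = SketchAggregatingVitalMonitor()
    repeat(100) {
        // Each "view" registers its own listener in its init block...
        val listener = object : SketchVitalListener {
            override fun onVitalUpdate(info: SketchVitalInfo) = Unit
        }
        monitor.register(listener)
        // ...but never calls monitor.unregister(listener) when the view stops.
    }
    // Prints 100: the monitor retains one listener per started view.
    println(monitor.listenerCount())
}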

Reproduction steps

  • Break on VitalMonitor.register and VitalMonitor.unregister while starting/stopping views, and observe that the AggregatingVitalMonitor.listeners map keeps growing.

Logcat logs

No response

Expected behavior

Vital monitor listeners are unregistered from the listeners map when a view is stopped.
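
As a rough sketch of that expectation, reusing the illustrative types from the sketch above (this is not the SDK's actual RumViewScope, and not necessarily the shape of the fix in #2009), every listener registered when a view starts would be unregistered when it stops:

// Illustrative only: a view-scoped object that balances register with unregister.
class SketchViewScope(private val frameRateMonitor: SketchAggregatingVitalMonitor) {
    private val frameRateListener = object : SketchVitalListener {
        override fun onVitalUpdate(info: SketchVitalInfo) = Unit
    }

    fun onViewStarted() {
        frameRateMonitor.register(frameRateListener)
    }

    fun onViewStopped() {
        // The missing piece in 2.8.0: remove the listener so the map shrinks back.
        frameRateMonitor.unregister(frameRateListener)
    }
}

The key point is simply that every register call is paired with an unregister call tied to the view lifecycle.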

Affected SDK versions

2.8.0

Latest working SDK version

NA

Did you confirm if the latest SDK version fixes the bug?

Yes

Kotlin / Java version

No response

Gradle / AGP version

No response

Other dependencies versions

No response

Device Information

No response

Other relevant information

No response

@joshskeen joshskeen added the bug Something isn't working label Apr 24, 2024
@xgouchet
Member

Thanks a lot @joshskeen for raising this issue; we'll deal with it as soon as possible.

@xgouchet xgouchet self-assigned this Apr 24, 2024
@pyricau
Contributor

pyricau commented Apr 24, 2024

A little additional context: we originally found this by leveraging a new (work-in-progress) LeakCanary feature that helps detect objects that keep growing. The tool surfaced the trace below, and @joshskeen and team then investigated and found the exact issue.

┬───
│ GcRoot(StickyClass) (9500 objects)
│
├─STATIC_FIELD FontsContract.sContext -> instance of com.squareup.development.PosDevApp (1 objects)
│
├─INSTANCE_FIELD Application.mActivityLifecycleCallbacks -> instance of java.util.ArrayList (1 objects)
│
├─ARRAY_ENTRY ArrayList.[x] -> instance of com.datadog.android.rum.internal.vitals.JankStatsActivityLifecycleListener (1 objects)
│
├─INSTANCE_FIELD JankStatsActivityLifecycleListener.vitalObserver -> instance of com.datadog.android.rum.internal.vitals.AggregatingVitalMonitor (1 objects)
│
╰→INSTANCE_FIELD AggregatingVitalMonitor.listeners -> instance of java.util.LinkedHashMap (1 objects)
    Children:
    118 objects (8 new): ARRAY_ENTRY LinkedHashMap.[x] -> instance of com.datadog.android.rum.internal.domain.scope.RumViewScope$frameRateVitalListener$1
    118 objects (8 new): ARRAY_ENTRY LinkedHashMap.[x] -> instance of com.datadog.android.rum.internal.vitals.VitalInfo
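
As a toy illustration of that idea (this is not LeakCanary's API, just a hypothetical sketch of repeated-growth detection): run the same scenario several times, measure labelled object counts after each run, and flag the labels whose count grows on every single iteration.

// Hypothetical sketch of repeated-growth detection, not LeakCanary code.
fun detectRepeatedGrowth(
    iterations: Int,
    runScenario: () -> Unit,
    measureCounts: () -> Map<String, Int>
): List<String> {
    var previous = measureCounts()
    // Start by assuming every label might be growing, then prune.
    val alwaysGrowing = previous.keys.toMutableSet()
    repeat(iterations) {
        runScenario()
        val current = measureCounts()
        // Keep only the labels whose count strictly increased this iteration.
        alwaysGrowing.retainAll { label ->
            (current[label] ?: 0) > (previous[label] ?: 0)
        }
        previous = current
    }
    return alwaysGrowing.toList()
}

In the trace above, the flagged object corresponds to the AggregatingVitalMonitor.listeners map, which shows 8 new entries compared to the previous heap dump.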

@xgouchet
Member

Thanks a lot for the hint @pyricau, that's really useful indeed.
