0.24.0 #881

Merged 63 commits on Apr 15, 2019

Commits
906b798
Bump version to 0.24.0
brettlangdon Mar 19, 2019
54161ff
[core] Guard against when there is no current call context (#852)
brettlangdon Mar 19, 2019
0b4df71
Use tox.ini checksum to update cache (#850)
majorgreys Mar 19, 2019
8ade9b3
[tests] Fix requests gevent tests (#854)
brettlangdon Mar 20, 2019
74bf295
[tests] Use spotify cassandra docker image (#855)
brettlangdon Mar 20, 2019
b06a725
Add script to build wheels (#853)
brettlangdon Mar 21, 2019
b4446bd
Ensure mkwheelhouse is installed
brettlangdon Mar 21, 2019
a1a4753
Roll back changes to mkwheelhouse and deploy_dev
brettlangdon Mar 21, 2019
e4cf48c
Do not run scripts/build-dist
brettlangdon Mar 21, 2019
c93632c
[tests] Do not test celery 4.2 with Kombu 4.4
jd Mar 28, 2019
cdb7cc9
Merge branch '0.24-dev' into fix-celery
majorgreys Mar 28, 2019
6ca68d1
Merge pull request #858 from jd/fix-celery
jd Mar 28, 2019
35ab7c5
[tests] Fix ddtrace sitecustomize negative test
jd Mar 27, 2019
613f781
[tests] Enable integration tests in docker-compose environment
jd Mar 29, 2019
cc03115
Merge pull request #857 from jd/fix-site-customize
jd Mar 29, 2019
c1f3a39
Replace mysql-connector 2.1 with mysql-connector-python
jd Apr 1, 2019
7c4858f
Merge pull request #866 from jd/fix-sqlalchemy-ci
jd Apr 1, 2019
a83684a
Update flake8 to 3.7 branch
jd Mar 27, 2019
f6e005a
[aiobotocore] Add support for versions up to 0.10.0
jd Mar 29, 2019
e02a7b3
Merge branch '0.24-dev' into dev-enable-test-integration
majorgreys Apr 2, 2019
5f28696
Merge pull request #865 from jd/upgrade-aiobotocore
jd Apr 2, 2019
c5116b1
Merge branch '0.24-dev' into update-flake8
jd Apr 2, 2019
bd8ed42
Merge branch '0.24-dev' into dev-enable-test-integration
jd Apr 2, 2019
d866deb
Add testing for Celery 4.3
jd Apr 2, 2019
a02449e
Add support for pytest4
jd Apr 2, 2019
1ef1221
Remove useless __future__ imports
jd Apr 2, 2019
7f3fbe9
Merge pull request #863 from jd/dev-enable-test-integration
jd Apr 3, 2019
8c30ea0
[core] Fix logging with unset DATADOG_PATCH_MODULES
jd Apr 3, 2019
9f2f78a
Merge pull request #872 from jd/fix-logging-issue-with-env-var
jd Apr 4, 2019
69cd82e
Merge branch '0.24-dev' into celery-43
majorgreys Apr 4, 2019
1c98d3b
Merge branch '0.24-dev' into remove-future-print
majorgreys Apr 4, 2019
909b4c4
Merge branch '0.24-dev' into update-flake8
majorgreys Apr 4, 2019
30466fa
Merge pull request #856 from jd/update-flake8
jd Apr 4, 2019
f275761
Merge branch '0.24-dev' into celery-43
jd Apr 4, 2019
a0d8055
Merge branch '0.24-dev' into remove-future-print
jd Apr 4, 2019
4d49b02
Merge branch '0.24-dev' into pytest4
jd Apr 4, 2019
0b091fb
Merge pull request #868 from jd/celery-43
jd Apr 4, 2019
a316d3e
Merge branch '0.24-dev' into remove-future-print
jd Apr 4, 2019
aef7569
Merge branch '0.24-dev' into pytest4
jd Apr 4, 2019
549899e
Merge pull request #871 from jd/remove-future-print
jd Apr 4, 2019
a911082
[testing] Remove nose usage
jd Apr 2, 2019
0ed5057
Remove confusing testing instructions from README
jd Apr 4, 2019
b5ba9d1
[aiohttp] Fix race condition in testing
jd Apr 5, 2019
87e49e2
[tests] Add support for aiohttp up to 3.5
jd Apr 4, 2019
deecab3
[core] Use DEBUG log level for RateSampler initialization (#861)
bmurphey Apr 9, 2019
c8137aa
Merge branch '0.24-dev' into pytest4
jd Apr 9, 2019
38ba8de
Merge pull request #869 from jd/pytest4
jd Apr 9, 2019
aa68c3e
Merge branch '0.24-dev' into aiohttp-upgrade
jd Apr 9, 2019
364b626
Merge pull request #873 from jd/aiohttp-upgrade
jd Apr 9, 2019
05bf5b5
Merge branch '0.24-dev' into update-readme-testing
jd Apr 9, 2019
8fb4ebd
Merge branch '0.24-dev' into fix-aiohttp-test-race
jd Apr 9, 2019
571d77e
Merge pull request #874 from jd/update-readme-testing
jd Apr 9, 2019
3ea963c
Merge branch '0.24-dev' into fix-aiohttp-test-race
jd Apr 9, 2019
624f8c7
Merge pull request #877 from jd/fix-aiohttp-test-race
jd Apr 9, 2019
346f43b
Merge branch '0.24-dev' into remove-nose
jd Apr 9, 2019
a304be5
Merge pull request #870 from jd/remove-nose
jd Apr 9, 2019
6a9f781
tests: add psycopg2 2.8 support
jd Apr 10, 2019
444647a
Enable requests integration by default
jd Apr 10, 2019
4c68385
Merge pull request #879 from jd/requests-by-default-on
jd Apr 10, 2019
a70595b
Merge branch '0.24-dev' into psycopg2-2.8
jd Apr 10, 2019
9938797
Merge pull request #878 from jd/psycopg2-2.8
jd Apr 10, 2019
5e032ee
[tests] Use a macro to persist result to workspace in CircleCI (#880)
jd Apr 11, 2019
2051e04
[core] Collect run-time metrics (#819)
Kyle-Verhoog Apr 11, 2019
Files changed
332 changes: 72 additions & 260 deletions .circleci/config.yml

Large diffs are not rendered by default.

1 change: 0 additions & 1 deletion .gitignore
@@ -41,7 +41,6 @@ htmlcov/
 .coverage
 .coverage.*
 .cache
-nosetests.xml
 coverage.xml
 *,cover
 .hypothesis/
22 changes: 1 addition & 21 deletions README.md
@@ -46,29 +46,9 @@ launch them through:
 [docker-compose]: https://www.docker.com/products/docker-compose
 
 
-#### Running the Tests in your local environment
-
-Once docker is up and running you should be able to run the tests. To launch a
-single test manually. For example to run the tests for `redis-py` 2.10 on Python
-3.5 and 3.6:
-
-    $ tox -e '{py35,py36}-redis{210}'
-
-To see the defined test commands see `tox.ini`.
-
-To launch the complete test matrix run:
-
-    $ tox
-
-
 #### Running Tests in docker
 
 If you prefer not to setup your local machine to run tests, we provide a preconfigured docker image.
 Note that this image is the same used in CircleCI to run tests.
 
-You still need docker containers running additional services up and running.
-
-Run the test runner
+Once your docker-compose environment is running, you can run the test runner image:
 
     $ docker-compose run --rm testrunner
 
7 changes: 2 additions & 5 deletions Rakefile
@@ -12,13 +12,9 @@ S3_BUCKET = "pypi.datadoghq.com"
 
 desc "release the a new wheel"
 task :'release:wheel' do
-  # Use mkwheelhouse to build the wheel, push it to S3 then update the repo index
-  # If at some point, we need only the 2 first steps:
-  # - python setup.py bdist_wheel
-  # - aws s3 cp dist/*.whl s3://pypi.datadoghq.com/#{s3_dir}/
   fail "Missing environment variable S3_DIR" if !S3_DIR or S3_DIR.empty?
 
-  # Use custom mkwheelhouse script to build and upload an sdist to S3 bucket
+  # Use custom `mkwheelhouse` to upload wheels and source distribution from dist/ to S3 bucket
   sh "scripts/mkwheelhouse"
 end
 
@@ -66,6 +62,7 @@ namespace :pypi do
 
   task :build => :clean do
     puts "building release in #{RELEASE_DIR}"
+    # TODO: Use `scripts/build-dist` instead to build sdist and wheels
    sh "python setup.py -q sdist -d #{RELEASE_DIR}"
   end
 
2 changes: 1 addition & 1 deletion ddtrace/__init__.py
@@ -4,7 +4,7 @@
 from .tracer import Tracer
 from .settings import config
 
-__version__ = '0.23.0'
+__version__ = '0.24.0'
 
 # a global tracer instance with integration settings
 tracer = Tracer()
7 changes: 6 additions & 1 deletion ddtrace/bootstrap/sitecustomize.py
@@ -46,7 +46,10 @@
 
 
 def update_patched_modules():
-    for patch in os.environ.get("DATADOG_PATCH_MODULES", '').split(','):
+    modules_to_patch = os.environ.get("DATADOG_PATCH_MODULES")
+    if not modules_to_patch:
+        return
+    for patch in modules_to_patch.split(','):
         if len(patch.split(':')) != 2:
             log.debug("skipping malformed patch instruction")
             continue
 
@@ -95,6 +98,8 @@ def add_global_tags(tracer):
     if priority_sampling:
         opts["priority_sampling"] = asbool(priority_sampling)
 
+    opts['collect_metrics'] = asbool(get_env('runtime_metrics', 'enabled'))
+
     if opts:
         tracer.configure(**opts)
 
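
For context, a minimal sketch (not the library's code; the helper name parse_patch_modules is hypothetical) of how the guarded parsing behaves once DATADOG_PATCH_MODULES may be unset:

    def parse_patch_modules(value):
        # Turn 'module:true,other:false' into a dict; unset or empty input yields {}.
        if not value:
            return {}
        modules = {}
        for patch in value.split(','):
            parts = patch.split(':')
            if len(parts) != 2:
                continue  # malformed instruction: skipped, mirroring sitecustomize
            name, enabled = parts
            modules[name] = enabled.lower() in ('true', '1')
        return modules

    assert parse_patch_modules(None) == {}
    assert parse_patch_modules('') == {}
    assert parse_patch_modules('django:true,flask:false') == {'django': True, 'flask': False}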
2 changes: 0 additions & 2 deletions ddtrace/commands/ddtrace_run.py
@@ -1,6 +1,4 @@
 #!/usr/bin/env python
-from __future__ import print_function
-
 from distutils import spawn
 import os
 import sys
2 changes: 1 addition & 1 deletion ddtrace/contrib/pyramid/patch.py
@@ -78,7 +78,7 @@ def insert_tween_if_needed(settings):
     # pyramid. We need our tween to be before it, otherwise unhandled
     # exceptions will be caught before they reach our tween.
     idx = tweens.find(pyramid.tweens.EXCVIEW)
-    if idx is -1:
+    if idx == -1:
         settings['pyramid.tweens'] = tweens + '\n' + DD_TWEEN_NAME
     else:
         settings['pyramid.tweens'] = tweens[:idx] + DD_TWEEN_NAME + "\n" + tweens[idx:]
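
A short aside on why this lint fix matters: `is` tests object identity, not value equality, and the language does not guarantee that every -1 refers to the same object:

    idx = 'abc'.find('z')  # -1: substring not found
    print(idx == -1)       # True: value comparison, always correct
    print(idx is -1)       # usually True in CPython due to small-int caching,
                           # but not guaranteed by the language (SyntaxWarning in 3.8+)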
2 changes: 1 addition & 1 deletion ddtrace/contrib/tornado/__init__.py
@@ -58,7 +58,7 @@ def notify(self):
         'analytics_enabled': False,
         'settings': {
             'FILTERS': [
-                FilterRequestsOnUrl(r'http://test\.example\.com'),
+                FilterRequestsOnUrl(r'http://test\\.example\\.com'),
             ],
         },
     },
6 changes: 3 additions & 3 deletions ddtrace/filters.py
@@ -18,15 +18,15 @@ class FilterRequestsOnUrl(object):
 
     To filter out http calls to domain api.example.com::
 
-        FilterRequestsOnUrl(r'http://api\.example\.com')
+        FilterRequestsOnUrl(r'http://api\\.example\\.com')
 
     To filter out http calls to all first level subdomains from example.com::
 
-        FilterRequestOnUrl(r'http://.*+\.example\.com')
+        FilterRequestOnUrl(r'http://.*+\\.example\\.com')
 
     To filter out calls to both http://test.example.com and http://example.com/healthcheck::
 
-        FilterRequestOnUrl([r'http://test\.example\.com', r'http://example\.com/healthcheck'])
+        FilterRequestOnUrl([r'http://test\\.example\\.com', r'http://example\\.com/healthcheck'])
     """
     def __init__(self, regexps):
         if isinstance(regexps, str):
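
As a usage sketch: the doubled backslashes above are only a docstring-escaping fix from the flake8 update; the regex passed at runtime still uses single backslashes. This assumes tracer.configure(settings={'FILTERS': [...]}), as the tornado docstring above suggests:

    from ddtrace import tracer
    from ddtrace.filters import FilterRequestsOnUrl

    # Drop traces for the test domain and the health-check endpoint.
    tracer.configure(settings={
        'FILTERS': [
            FilterRequestsOnUrl([
                r'http://test\.example\.com',
                r'http://example\.com/healthcheck',
            ]),
        ],
    })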
2 changes: 1 addition & 1 deletion ddtrace/helpers.py
@@ -32,6 +32,6 @@ def get_correlation_ids(tracer=None):
         return None, None
 
     span = tracer.current_span()
-    if span is None:
+    if not span:
         return None, None
     return span.trace_id, span.span_id
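
A brief usage sketch for this helper, e.g. for injecting trace context into logs (the log format is illustrative, not a fixed convention):

    from ddtrace import helpers

    trace_id, span_id = helpers.get_correlation_ids()
    if trace_id is not None:  # both IDs are None when no span is active
        print('[dd.trace_id={} dd.span_id={}] handling request'.format(trace_id, span_id))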
12 changes: 12 additions & 0 deletions ddtrace/internal/runtime/__init__.py
@@ -0,0 +1,12 @@
from .runtime_metrics import (
    RuntimeTags,
    RuntimeMetrics,
    RuntimeWorker,
)


__all__ = [
    'RuntimeTags',
    'RuntimeMetrics',
    'RuntimeWorker',
]
85 changes: 85 additions & 0 deletions ddtrace/internal/runtime/collector.py
@@ -0,0 +1,85 @@
import importlib

from ..logger import get_logger

log = get_logger(__name__)


class ValueCollector(object):
    """A basic state machine useful for collecting, caching and updating data
    obtained from different Python modules.

    The two primary use-cases are
    1) data loaded once (like tagging information)
    2) periodically updating data sources (like thread count)

    Functionality is provided for requiring and importing modules which may or
    may not be installed.
    """
    enabled = True
    periodic = False
    required_modules = []
    value = None
    value_loaded = False

    def __init__(self, enabled=None, periodic=None, required_modules=None):
        self.enabled = self.enabled if enabled is None else enabled
        self.periodic = self.periodic if periodic is None else periodic
        self.required_modules = self.required_modules if required_modules is None else required_modules

        self._modules_successfully_loaded = False
        self.modules = self._load_modules()
        if self._modules_successfully_loaded:
            self._on_modules_load()

    def _on_modules_load(self):
        """Hook triggered after all required_modules have been successfully loaded.
        """

    def _load_modules(self):
        modules = {}
        try:
            for module in self.required_modules:
                modules[module] = importlib.import_module(module)
            self._modules_successfully_loaded = True
        except ImportError:
            # DEV: disable collector if we cannot load any of the required modules
            self.enabled = False
            log.warn('Could not import module "{}" for {}. Disabling collector.'.format(module, self))
            return None
        return modules

    def collect(self, keys=None):
        """Returns metrics as collected by `collect_fn`.

        :param keys: The keys of the metrics to collect.
        """
        if not self.enabled:
            return self.value

        keys = keys or set()

        if not self.periodic and self.value_loaded:
            return self.value

        # call underlying collect function and filter out keys not requested
        self.value = self.collect_fn(keys)

        # filter values for keys
        if len(keys) > 0 and isinstance(self.value, list):
            self.value = [
                (k, v)
                for (k, v) in self.value
                if k in keys
            ]

        self.value_loaded = True
        return self.value

    def __repr__(self):
        return '<{}(enabled={},periodic={},required_modules={})>'.format(
            self.__class__.__name__,
            self.enabled,
            self.periodic,
            self.required_modules,
        )
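
To illustrate the contract, a hypothetical subclass (not part of this PR): declare required_modules, implement collect_fn(keys), and ValueCollector handles importing, caching and key filtering:

    class ThreadCountCollector(ValueCollector):
        periodic = True                   # re-collect on every collect() call
        required_modules = ['threading']  # imported by ValueCollector; disables itself on ImportError

        def collect_fn(self, keys):
            threading = self.modules['threading']
            return [('thread_count', threading.active_count())]

    collector = ThreadCountCollector()
    print(collector.collect())  # e.g. [('thread_count', 1)]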
46 changes: 46 additions & 0 deletions ddtrace/internal/runtime/constants.py
@@ -0,0 +1,46 @@
GC_COUNT_GEN0 = 'runtime.python.gc.count.gen0'
GC_COUNT_GEN1 = 'runtime.python.gc.count.gen1'
GC_COUNT_GEN2 = 'runtime.python.gc.count.gen2'

THREAD_COUNT = 'runtime.python.thread_count'
MEM_RSS = 'runtime.python.mem.rss'
CPU_TIME_SYS = 'runtime.python.cpu.time.sys'
CPU_TIME_USER = 'runtime.python.cpu.time.user'
CPU_PERCENT = 'runtime.python.cpu.percent'
CTX_SWITCH_VOLUNTARY = 'runtime.python.cpu.ctx_switch.voluntary'
CTX_SWITCH_INVOLUNTARY = 'runtime.python.cpu.ctx_switch.involuntary'

GC_RUNTIME_METRICS = set([
    GC_COUNT_GEN0,
    GC_COUNT_GEN1,
    GC_COUNT_GEN2,
])

PSUTIL_RUNTIME_METRICS = set([
    THREAD_COUNT,
    MEM_RSS,
    CTX_SWITCH_VOLUNTARY,
    CTX_SWITCH_INVOLUNTARY,
    CPU_TIME_SYS,
    CPU_TIME_USER,
    CPU_PERCENT,
])

DEFAULT_RUNTIME_METRICS = GC_RUNTIME_METRICS | PSUTIL_RUNTIME_METRICS

RUNTIME_ID = 'runtime-id'
SERVICE = 'service'
LANG_INTERPRETER = 'lang_interpreter'
LANG_VERSION = 'lang_version'

TRACER_TAGS = set([
    RUNTIME_ID,
    SERVICE,
])

PLATFORM_TAGS = set([
    LANG_INTERPRETER,
    LANG_VERSION
])

DEFAULT_RUNTIME_TAGS = TRACER_TAGS
92 changes: 92 additions & 0 deletions ddtrace/internal/runtime/metric_collectors.py
@@ -0,0 +1,92 @@
import os

from .collector import ValueCollector
from .constants import (
    GC_COUNT_GEN0,
    GC_COUNT_GEN1,
    GC_COUNT_GEN2,
    THREAD_COUNT,
    MEM_RSS,
    CTX_SWITCH_VOLUNTARY,
    CTX_SWITCH_INVOLUNTARY,
    CPU_TIME_SYS,
    CPU_TIME_USER,
    CPU_PERCENT,
)


class RuntimeMetricCollector(ValueCollector):
    value = []
    periodic = True


class GCRuntimeMetricCollector(RuntimeMetricCollector):
    """ Collector for garbage collection generational counts

    More information at https://docs.python.org/3/library/gc.html
    """
    required_modules = ['gc']

    def collect_fn(self, keys):
        gc = self.modules.get('gc')

        counts = gc.get_count()
        metrics = [
            (GC_COUNT_GEN0, counts[0]),
            (GC_COUNT_GEN1, counts[1]),
            (GC_COUNT_GEN2, counts[2]),
        ]

        return metrics


class PSUtilRuntimeMetricCollector(RuntimeMetricCollector):
    """Collector for psutil metrics.

    Performs batched operations via proc.oneshot() to optimize the calls.
    See https://psutil.readthedocs.io/en/latest/#psutil.Process.oneshot
    for more information.
    """
    required_modules = ['psutil']
    stored_value = dict(
        CPU_TIME_SYS_TOTAL=0,
        CPU_TIME_USER_TOTAL=0,
        CTX_SWITCH_VOLUNTARY_TOTAL=0,
        CTX_SWITCH_INVOLUNTARY_TOTAL=0,
    )

    def _on_modules_load(self):
        self.proc = self.modules['psutil'].Process(os.getpid())

    def collect_fn(self, keys):
        with self.proc.oneshot():
            # only return time deltas
            # TODO[tahir]: better abstraction for metrics based on last value
            cpu_time_sys_total = self.proc.cpu_times().system
            cpu_time_user_total = self.proc.cpu_times().user
            cpu_time_sys = cpu_time_sys_total - self.stored_value['CPU_TIME_SYS_TOTAL']
            cpu_time_user = cpu_time_user_total - self.stored_value['CPU_TIME_USER_TOTAL']

            ctx_switch_voluntary_total = self.proc.num_ctx_switches().voluntary
            ctx_switch_involuntary_total = self.proc.num_ctx_switches().involuntary
            ctx_switch_voluntary = ctx_switch_voluntary_total - self.stored_value['CTX_SWITCH_VOLUNTARY_TOTAL']
            ctx_switch_involuntary = ctx_switch_involuntary_total - self.stored_value['CTX_SWITCH_INVOLUNTARY_TOTAL']

            self.stored_value = dict(
                CPU_TIME_SYS_TOTAL=cpu_time_sys_total,
                CPU_TIME_USER_TOTAL=cpu_time_user_total,
                CTX_SWITCH_VOLUNTARY_TOTAL=ctx_switch_voluntary_total,
                CTX_SWITCH_INVOLUNTARY_TOTAL=ctx_switch_involuntary_total,
            )

            metrics = [
                (THREAD_COUNT, self.proc.num_threads()),
                (MEM_RSS, self.proc.memory_info().rss),
                (CTX_SWITCH_VOLUNTARY, ctx_switch_voluntary),
                (CTX_SWITCH_INVOLUNTARY, ctx_switch_involuntary),
                (CPU_TIME_SYS, cpu_time_sys),
                (CPU_TIME_USER, cpu_time_user),
                (CPU_PERCENT, self.proc.cpu_percent()),
            ]

            return metrics
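
Finally, a quick sketch exercising the new collectors (psutil must be installed for the second one; otherwise it logs a warning and disables itself):

    from ddtrace.internal.runtime.metric_collectors import (
        GCRuntimeMetricCollector,
        PSUtilRuntimeMetricCollector,
    )
    from ddtrace.internal.runtime.constants import GC_COUNT_GEN0

    gc_collector = GCRuntimeMetricCollector()
    print(gc_collector.collect())                      # counts for all three GC generations
    print(gc_collector.collect(keys={GC_COUNT_GEN0}))  # filtered down to gen0 only

    ps_collector = PSUtilRuntimeMetricCollector()
    print(ps_collector.collect())  # thread count, RSS, CPU time deltas, context switches, CPU percent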