Gitpod cloud development environment #2439
Conversation
Hmm, it looks like the frontend is fetching from old workspace URLs.
.gitpod.yml (outdated)

npm run start
docker-compose --env-file .env up -d --no-dep nginx
docker-compose --env-file .env up -d
gp await-port 9200 && npm run start
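Since `gp await-port` is Gitpod-specific, the same "block until Elasticsearch answers" idea can be sketched portably. This is only an illustration; the `wait_for` helper and the curl line are hypothetical, not part of this PR.

```shell
# Sketch of a portable fallback for `gp await-port`: poll a command until it
# succeeds or a timeout elapses. `wait_for` and the curl example are
# hypothetical, not taken from this PR.
wait_for() {
  # usage: wait_for <timeout_seconds> <command...>
  deadline=$(( $(date +%s) + $1 ))
  shift
  until "$@" >/dev/null 2>&1; do
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1
    fi
    sleep 1
  done
}
# e.g. wait_for 60 curl -fsS http://localhost:9200 && npm run start
```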
Add a comment here that we're waiting on Elasticsearch, so it's clear why you're doing this. Having said that, I'm not clear if we need this. @manekenpix has code in our Elasticsearch client that retries connections.
On some occasions, the npm run start script will exit before Elasticsearch has had a chance to start and accept connections. I don't know if the retries are based on a certain number of attempts or on a certain amount of time. For some reason, we usually need to wait longer for Elasticsearch.
We should probably file an issue to bump the number of retries we do, see telescope/src/backend/utils/indexer.js, Lines 127 to 167 in 29951b7:
/**
 * Checks elasticsearch's connectivity
 */
const checkConnection = () => client.cluster.health();

const waitOnReady = async () => {
  /**
   * Elasticsearch needs time after deployment for setting up all its components.
   * Here we set a timer using 'setTimeout' and check for connectivity during the countdown so elasticsearch
   * has time to be fully prepared to start indexing posts.
   */
  const DELAY = process.env.ELASTIC_DELAY_MS || 10000;
  let intervalId;
  let timerId;
  const timer = new Promise((resolve, reject) => {
    timerId = setTimeout(() => {
      reject(
        new Error(
          'Unable to connect to Elasticsearch. Use `MOCK_ELASTIC=1` in your `.env` to mock Elasticsearch, or install (see https://github.com/Seneca-CDOT/telescope/blob/master/docs/environment-setup.md)'
        )
      );
    }, DELAY);
  });
  const connectivity = new Promise((resolve) => {
    intervalId = setIntervalAsync(async () => {
      try {
        await checkConnection();
        resolve();
      } catch (error) {
        logger.info('Attempting to connect to elasticsearch...');
      }
    }, 500);
  });
  await Promise.race([timer, connectivity]);
  await clearIntervalAsync(intervalId);
  clearTimeout(timerId);
};
The back-end uses this to determine for how long it should try to connect to ES (it's time, not number of attempts). If it can't connect, the back-end exits, BUT in production/staging we run the back-end in a container with unless-stopped, so it tries again after shutting down.
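Since the window is time-based, one low-effort mitigation for slow local starts would be to raise the `ELASTIC_DELAY_MS` variable that the indexer snippet above reads. This is just a sketch; the 30-second value and the `.env` path are assumptions, not something this PR does.

```shell
# Sketch: give Elasticsearch 30s instead of the 10s default before the
# back-end's waitOnReady() rejects. The value and file path are assumptions.
echo 'ELASTIC_DELAY_MS=30000' >> .env
```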
.gitpod.yml (outdated)

command: |
  docker-compose --env-file config/env.gitpod up -d
  npm run start
  docker-compose --env-file .env up -d --no-dep nginx
Can you explain why you're running docker-compose multiple times here?
What if we use npm run dev instead of building the nginx-based frontend?
What we really need to do is rework our microservices so that we're building them in CI and pushing to a container registry, vs. always building them from scratch. cc @raygervais for any ideas on how to approach this in a follow-up.
See #2264
What if we use npm run dev instead of building the nginx-based frontend?
That's a very good idea. I didn't think of that.
What we really need to do is rework our microservices so that we're building and putting in a container registry from CI.
Current build time is somewhere around 20 minutes, which is unbearable.
20 min!? Yeah, we need to pull cached builds vs. creating our own.
@humphd, is there any way I can start all services except the frontend? Or would I have to list all services by hand?
I'm not aware of a way to do this without saying what you want. What you did is how I'd do it.
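For reference, the "name them yourself" approach is just listing every service except the frontend on the docker-compose command line. This is a sketch only; the service list mirrors the SERVICES variable used elsewhere in this PR and may drift over time. Here we just build and print the command rather than invoke docker-compose.

```shell
# Sketch: start everything except the nginx frontend by naming services
# explicitly. The list mirrors this PR's SERVICES variable (an assumption);
# `echo` is used so the sketch is runnable without Docker installed.
SERVICES="status image auth search posts feed-discovery users parser redis elasticsearch login firebase"
echo docker-compose --env-file .env up -d $SERVICES
```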
eval $(gp env -e WEB_URL=$(gp url 8000))
eval $(gp env -e API_HOST=$(gp url 8443))

sed -r \
Clever!
I'm testing it out on my machine too, but I've been hitting this CORS error all day, even in the AWS issues #2398 #2390 that are kind of similar to this PR. No idea if it's on my end or what. In your
@cindyledev, notice the URLs are different, they are
I wonder if this would affect anything? I'm currently replacing localhost:3000/8000/8443 only.
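As a concrete illustration of that replacement, the rewrite could look something like the following. This is a sketch with made-up workspace URLs; in Gitpod the real values would come from `gp url 8000` / `gp url 8443`, and the sample input lines are invented for the demo.

```shell
# Sketch of the localhost -> workspace-URL rewrite. The URLs below are made
# up; in Gitpod they would come from `gp url 8000` and `gp url 8443`.
WEB_URL="https://8000-example.ws-us17.gitpod.io"
API_HOST="https://8443-example.ws-us17.gitpod.io"
printf 'WEB_URL=http://localhost:8000\nAPI_HOST=http://localhost:8443\n' |
  sed -r \
    -e "s|http://localhost:8000|${WEB_URL}|g" \
    -e "s|http://localhost:8443|${API_HOST}|g"
```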
I'm able to hit
and
but for some reason, the Next.js frontend is not fetching any data. I wonder where it's trying to get data from...
It should be hitting https://8443-lavender-lemur-niijfpmt.ws-us17.gitpod.io/v1/posts (works for me here).
This has the proper CORS header.
Silly question. Would Next.js need
My working server with npm run dev here; it's hitting the correct URLs.
I see it!! Wow!
Oops, CI is failing.
.gitpod.yml (outdated)

eval $(gp env -e API_URL=$(gp url 3000))
eval $(gp env -e WEB_URL=$(gp url 8000))
eval $(gp env -e API_HOST=$(gp url 8443))
eval $(gp env -e SERVICES="status image auth search posts feed-discovery users parser planet redis elasticsearch login firebase")
If we want to try and trim this down a bit, we could remove planet. It's not needed in development (unless you're specifically working with it), which almost no one will be.
I trimmed planet.
Can you rebase to pick up the changes I had to make for the parser being disabled?
Let me do that again, I didn't disable the parser.
This looks right to me.
I got this working (my first time using Gitpod, so cool)! I ran into a few things:
It would be good to solve these, as most new devs won't have any idea how to. Also, it's slow to start; we need to improve that. But it works!
Wow, this is finally working on my end!!!
Merged!
Issue This PR Addresses
Fixes #2427.
I think Gitpod is now fully working as intended.
The cool thing is we don't need gitpod.env anymore. Gitpod now uses env.development as a base and updates the URLs every time a workspace starts.