You can try the virtualenv method described here: https://docs.aws.amazon.com/lambda/latest/dg/lambda-python-how-to-create-deployment-package.html#python-package-venv
Make sure that when running the pip commands, you specify the flags to get Lambda-compatible wheel packages. For example:
--python-version 27 --only-binary :all: --platform manylinux1_x86_64 --abi cp27mu
for python 2.7 or
--python-version 36 --only-binary :all: --platform manylinux1_x86_64 --abi cp36m
for python 3.6.
This will only work if all of the dependencies have compatible wheels. If they don't, you will have to build your deployment package in an environment compatible with lambda. See here for details: https://docs.aws.amazon.com/lambda/latest/dg/lambda-runtimes.html
Thanks for the information. This definitely gives me a direction to work from.
I tried modifying my setup to run:
python -m pip install -r requirements.txt --python-version 36 --only-binary :all: --platform manylinux1_x86_64 --abi cp36m -t vendored
And I get:
Collecting gmusicapi (from -r requirements.txt (line 2))
ERROR: Could not find a version that satisfies the requirement gmusicapi (from -r requirements.txt (line 2)) (from versions: none)
ERROR: No matching distribution found for gmusicapi (from -r requirements.txt (line 2))
I still need to read up more on wheels. Am I correct in assuming this is the case where the package doesn't have a compatible wheel? If so, would I need to pull the repo for this package and set up wheels manually?
I'd like to avoid setting up a virtual machine to develop my code if possible, but how much of a rabbit hole would I be jumping into by avoiding it?
So I actually got the idea to move my resolved dependencies from my first install into a new requirements.txt:
pip freeze > requirements.txt
When I rerun:
python -m pip install -r requirements.txt --no-cache-dir --python-version 36 --only-binary :all: --platform manylinux1_x86_64 --abi cp36m -t vendored
I get:
ERROR: Could not find a version that satisfies the requirement future==0.17.1 (from -r requirements.txt (line 7)) (from versions: none)
ERROR: No matching distribution found for future==0.17.1 (from -r requirements.txt (line 7))
Checking https://pythonwheels.com/ it looks like "future" does not have a wheel archive. Is this the actual root of my issue?
That's one of those cases where you'll have to build the wheel yourself. If you can determine that the package uses no native code, it doesn't matter where you build it. If it does use native code, a VirtualBox VM may not be enough. I've found it useful to spin up a tiny EC2 instance, build the package there, then save the generated wheel in a local pip repo for future builds.
And for the record, building wheels is fairly easy. Modern pip versions with the wheel package installed will automatically create and cache wheels after installing packages, to speed up future installs. So just:
pip install wheel
and then
pip install <packagename>
and that should be it.
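If you'd rather produce the wheel file explicitly (so you can host it in a local repo later) instead of relying on pip's install-time cache, `pip wheel` does that directly. A minimal sketch, assuming the `future` package and an output directory of `./wheelhouse` (both are just illustrative choices):

```shell
# Hypothetical sketch: build a wheel archive for a package explicitly,
# placing the resulting .whl file in ./wheelhouse for later hosting.
pip install wheel
pip wheel future==0.17.1 --wheel-dir ./wheelhouse
```

For a pure-Python package like this, the resulting wheel is platform-independent and can be built anywhere.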
So since your last response, I've been doing a lot of investigation into using Docker for my deployment. I'm assuming I could use this to replace the EC2 instance in your example. With this workflow, are you suggesting that I could potentially:
- spin up a Docker container running amazonlinux
- compile the wheel for the problematic pip dependency (in amazonlinux)
- copy the contents from the container into my local serverless repo
- somehow deploy my serverless app using the new wheel?
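The steps above could be sketched roughly like this, assuming Docker is installed; the image tag, directories, and requirements file are illustrative assumptions, not confirmed details of your setup:

```shell
# Hypothetical sketch: build Linux-compatible wheels inside an
# Amazon Linux container, with the host project mounted at /work
# so the output lands back in the local repo.
# (Requires Docker and network access; package manager commands
# may differ between amazonlinux image versions.)
docker run --rm -v "$PWD":/work -w /work amazonlinux:2 \
    bash -c "yum install -y python3-pip gcc python3-devel && \
             pip3 wheel -r requirements.txt -w wheelhouse"
# The built wheels now sit in ./wheelhouse on the host, ready to be
# vendored into the deployment package.
```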
I understand this is glossing over your recommendation about a local pip repo; I'm going to do more research to better understand what you mean here. Maybe this would simplify the workflow even further.
That's about the size of it. I'm unfortunately not familiar with the serverless framework, so I don't know how that part would work.
For the local pip repo, it can be as simple as an HTTP server with a proper folder hierarchy: https://packaging.python.org/guides/hosting-your-own-index/. Once you have the properly built wheels in a local repo, you can tell pip to include the local repo in its search.
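As a concrete sketch of that guide's layout (the port, folder names, and package are illustrative assumptions): one subfolder per project name, a plain directory-listing HTTP server on top, and pip pointed at it as an extra index:

```shell
# Hypothetical sketch: serve a folder of locally built wheels as a
# simple package index, then tell pip to search it alongside PyPI.
mkdir -p wheelhouse/future
# (copy the locally built future-*.whl into wheelhouse/future/ here)
python3 -m http.server 8080 --directory wheelhouse &
pip install --extra-index-url http://localhost:8080/ future
```

Note that `--directory` on http.server needs Python 3.7+; any static file server with directory listings would work the same way.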
Oh cool. It may be hard to automate as a robust solution, but maybe I can run some tests: build a simple web server, build the wheels with Docker, then move them over to the web server.
I appreciate the help! I've definitely been learning a lot through this whole process. I tried a workflow where I could mimic my environment in Docker and deploy from it directly, but installing Node.js (to install serverless) made the cycle far too long for dev testing because the Node install is huge. I ended up trying to build with the serverless-python-requirements plugin for serverless and got most of the way there. Naturally, anything I import back into my local repo hits the opposite issue, complaining it can't find the macOS .so files. But it's still a work in progress getting it to behave correctly in AWS.
Using the serverless-python-requirements plugin with pipenv and dockerization resolved my issue! I needed to go back and recreate my Pipfile.lock to fix some of my dependencies, but after doing so, I seem to have everything running again in dev and prod!
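For anyone landing here later, the relevant plugin configuration can be quite small. This is a hedged sketch based on the serverless-python-requirements README (option names should be verified against your plugin version):

```yaml
# serverless.yml (fragment) - illustrative sketch, not the exact
# configuration used above
plugins:
  - serverless-python-requirements
custom:
  pythonRequirements:
    usePipenv: true      # resolve dependencies from Pipfile/Pipfile.lock
    dockerizePip: true   # build packages inside a Lambda-compatible container
```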
Thank you for all the help diagnosing this problem :)
Thanks a lot! I was going crazy for almost two weeks because I couldn't find any answer for a related error with the pycryptodomex package, but after trying this, everything started to work flawlessly.